RasPi Camera Power Consumption on Spirit Rover

Well, since ironing out a few AVR upload/connection issues, I can now use the serial link to record data and analyze power consumption.  The following plot shows the measured current draw (Amps) multiplied by the battery voltage to give a power consumption estimate.  See if you can tell when the camera is recording video…
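The power estimate itself is just P = V × I for each sample coming over the serial link. Here is a minimal sketch of that computation (the comma-separated "volts,amps" line format is my assumption for illustration, not the rover's actual telemetry format):

```python
def power_samples(lines):
    """Yield (volts, amps, watts) tuples from telemetry lines.

    Assumes each line looks like "volts,amps"; the real AVR
    telemetry format may differ.  Malformed lines are skipped.
    """
    for line in lines:
        try:
            volts, amps = (float(x) for x in line.strip().split(","))
        except ValueError:
            continue  # skip malformed or partial lines
        yield volts, amps, volts * amps
```

In practice the lines would come from something like pyserial's `readline()` in a loop rather than an in-memory list.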

 

[Screenshot: measured power consumption plot]

The rover was connected to “shore power” via USB to my laptop for the serial connection (lower right in the image), and I was connected over WiFi to the RasPi via VNC, where you can see the streaming video from the camera showing Super Bowl LII.  (I was multitasking on my weekend!)

[Screenshot: VNC desktop with streaming camera video]

I find it interesting that the camera consumes 2x the power when streaming video.  I wonder how bad it is when it does a panorama stitch…  more to come…

 


OpenCV on SpiritRover

Well, I built and installed OpenCV 3.4.0, wrote a nice utility for commanding the Spirit Rover pan/tilt head in Python from the Raspberry Pi, took a few images, and used OpenCV on the Raspberry Pi to stitch them together.  Check out the result:

[Image: openCV_stitch_30deg_SpiritRover]

On the right are the original images; the top left shows the matched features found between the two images, connected by green lines.  Bottom left is the resulting stitched image.

The Python code I used to do this came from a post over on the PyImageSearch blog by Adrian Rosebrock.

My handy SpiritRover PanTilt and Servo objects, which make commanding the pan/tilt head easy from Python on the Raspberry Pi over the i2c bus (the PIC is what generates the servo PWM signals, and it must be commanded via i2c), are:

''' This program is to explore the i2c bus on the SpiritRover Kit.

Original Source:  http://forum.plumgeek.com/viewtopic.php?f=18&t=6575

The SpiritRover i2c bus looks like:

pi@spirit_rover:~ $ sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- 32 -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- 6b -- -- -- --
70: -- -- -- -- -- -- -- --

0x32 is the PIC microprocessor.

'''
from __future__ import print_function
import smbus
import time


class ServoCal(object):

    def __init__(self, minVal=45, center=90, maxVal=135):
        self.min = minVal
        self.center = center
        self.max = maxVal

    def setCalArray(self, cal_array):
        self.min = cal_array[0]
        self.center = cal_array[1]
        self.max = cal_array[2]


class Servo(object):

    def __init__(self, register, i2c_bus=1, PIC_address=0x32):
        self.i2c_bus = smbus.SMBus(i2c_bus)
        self.PIC_address = PIC_address
        self.cal = ServoCal()
        self.register = register
        self.state = None
        self.write_err_count = 0

    def writeByte(self, value, max_retries=5):
        ''' Write one byte to this servo's register on the PIC,
        retrying on transient i2c errors.  Returns 0 on success,
        or the negated error count on failure. '''
        self.write_err_count = 0
        while True:
            try:
                self.i2c_bus.write_byte_data(self.PIC_address,
                                             self.register, value)
                return 0
            except IOError:
                self.write_err_count += 1
                if self.write_err_count >= max_retries:
                    return self.write_err_count * -1

    def limitCheck(self, cmd):
        ''' clamp specified location value to limit
        if outside specified axis limits '''

        if cmd > self.cal.max:
            cmd = self.cal.max
        elif cmd < self.cal.min:
            cmd = self.cal.min
        return cmd

    def moveAbsolute(self, cmd):
        ''' Move to specific value w/ limit check '''
        cmd = self.limitCheck(cmd)
        self.writeByte(cmd)
        self.state = cmd
        return 0

    def moveTo(self, cmd):
        ''' Move to cal-center-relative position '''
        cmd = self.cal.center + cmd
        self.moveAbsolute(cmd)
        return 0

    def moveDelta(self, delta):
        ''' Move relative to current position'''
        self.moveAbsolute(self.state + delta)
        return 0

    def center(self):
        ''' Center specified axis to cal value'''
        self.moveTo(0)
        return 0


class PanTilt(object):

    def __init__(self):
        self.pan = Servo(53)
        self.pan.cal.setCalArray([30, 90, 150])
        self.tilt = Servo(52)
        self.tilt.cal.setCalArray([20, 110, 150])
        self.center()

    def vecMoveAbsolute(self, cmd_vec):
        ''' move both axes to specified locations '''
        self.pan.moveAbsolute(cmd_vec[0])
        self.tilt.moveAbsolute(cmd_vec[1])
        return 0

    def vecMoveTo(self, cmd_vec):
        ''' move both axes to cal-center-relative positions '''
        self.pan.moveTo(cmd_vec[0])
        self.tilt.moveTo(cmd_vec[1])
        return 0

    def vecMoveDelta(self, cmd_vec):
        ''' Move both axes relative to current position'''
        self.pan.moveDelta(cmd_vec[0])
        self.tilt.moveDelta(cmd_vec[1])
        return 0

    def center(self, axis='both'):
        ''' Center all axes to cal center values'''
        if axis == 'both':
            self.pan.center()
            self.tilt.center()
        else:
            getattr(self, axis).center()
        return 0

    def fullScan(self, pan_delta=15, tilt_delta=15, pause=0.5):
        ''' Sweep the pan axis back and forth, stepping the tilt axis
        up one increment at each end of pan travel, pausing between
        moves.  Re-centers once the full field of regard is covered. '''
        self.pan.moveAbsolute(self.pan.cal.min)
        self.tilt.moveAbsolute(self.tilt.cal.min)
        time.sleep(pause)

        while True:
            at_end = (self.pan.state >= self.pan.cal.max if pan_delta > 0
                      else self.pan.state <= self.pan.cal.min)
            if at_end:
                if self.tilt.state >= self.tilt.cal.max:
                    self.center()
                    break
                self.tilt.moveDelta(tilt_delta)
                pan_delta = -pan_delta
            else:
                self.pan.moveDelta(pan_delta)
            time.sleep(pause)


if __name__ == "__main__":
    pt = PanTilt()
    pt.fullScan()

 

I intend to hybridize the OpenCV code from Adrian’s Blog and something like the PanTilt.fullScan() method above to automatically stitch a scene from the rover’s point of view over the entire field of regard of the pan/tilt servo head.  I’ll post the results soon, I hope.
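One way to plan that hybrid is to pre-compute the pan/tilt waypoints to visit, capture an image at each, and then hand the whole set to the stitcher. Here is a sketch of a snake-order waypoint generator (the function is my own illustration, not part of the rover's API; the default limits mirror the calibration values used above):

```python
def scan_waypoints(pan_min=30, pan_max=150, tilt_min=20, tilt_max=150,
                   pan_step=30, tilt_step=30):
    """Yield (pan, tilt) waypoints covering the full field of regard,
    reversing the pan direction on each tilt row to minimize travel."""
    pans = list(range(pan_min, pan_max + 1, pan_step))
    for row, tilt in enumerate(range(tilt_min, tilt_max + 1, tilt_step)):
        ordered = pans if row % 2 == 0 else list(reversed(pans))
        for pan in ordered:
            yield pan, tilt
```

Each pair could then be passed to PanTilt.vecMoveAbsolute(), with a pause and a camera capture between moves.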

Spirit Rover from PlumGeek

So in December, I received my Spirit Rover from Plum Geek Robotics for supporting their Kickstarter campaign.  This is a great kit for learning robotics, embedded computing, and microcontrollers.  Well done, Kevin & company at Plum Geek.

[Photo: IMG_3555]

Erik’s completed kit of the Plum Geek Robotics Spirit Rover

Between October 2000 and February 2004, I worked on the Entry, Descent, and Landing team for Spirit and Opportunity at the Jet Propulsion Laboratory (JPL), so I had to support Kevin’s Kickstarter when I saw it.  I’m elated to see the work we did at JPL being used to build hands-on kits that inspire future generations of roboticists.  Note that Opportunity is still kicking on Mars, past sol 4970 and more than 45 kilometers of driving!  Go Oppy!

Back to Kevin’s kit, though.   For those of you who are building it, make sure you follow the instructions VERY carefully — read through them in their entirety before you start, and read each step completely before embarking upon doing that particular step.  There are subtle details that matter.

The pre-loaded software is an Arduino sketch that has three different modes, and the robot makes no use of the Raspberry Pi sandwiched inside the robot.  My robot came with a vanilla Raspbian Jessie install on a 16GB card, which I upgraded to a 128GB card with Raspbian Stretch.

I changed the default user (pi) password, and used raspi-config to set up SSH, VNC, I2C, SPI, and Serial interfaces, as well as enable the raspberry pi camera.

I want to be able to develop robot Python scripts and C code with OpenCV, which uses the CMake build process.  I figured having cmake-gui might be handy, so I installed it, along with OpenCV’s installation dependencies, using apt-get.  I then grabbed the 3.4.0 tag of OpenCV from git, because the package manager for Raspbian Stretch only has OpenCV 2.4 and I wanted the latest, and used CMake to build the source.  The process looks like this:

sudo apt-get update
sudo apt-get upgrade
sudo rpi-update
sudo reboot
sudo apt-get install build-essential git cmake pkg-config \
  libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev \
  libavcodec-dev libavformat-dev libswscale-dev \
  libv4l-dev libxvidcore-dev libx264-dev libgtk2.0-dev \
  libatlas-base-dev gfortran python2.7-dev python3-dev python-pip \
  python-numpy python-scipy python-picamera

cd ~
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout 3.4.0
cd ~
git clone https://github.com/opencv/opencv_contrib.git
cd opencv_contrib
git checkout 3.4.0


cd ~/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D CMAKE_INSTALL_PREFIX=/usr/local \
 -D INSTALL_C_EXAMPLES=OFF \
 -D INSTALL_PYTHON_EXAMPLES=ON \
 -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
 -D BUILD_EXAMPLES=ON ..
make -j3
sudo make install
sudo ldconfig

Then I tested it on the Spirit Rover using example code from here.  The following is a scrot screen grab of streaming video coming from the Raspberry Pi camera into OpenCV and being displayed on screen in an OpenCV window:

[Screenshot: 2018-01-28-142418_1360x768_scrot]

Success!  I had to add

camera.hflip = True
camera.vflip = True

to the example scripts at the link above to orient the images properly, due to how the camera is mounted on the pan/tilt head of my Spirit Rover.

So far, so good!  Now to tackle i2c and spi comm to the other ICs on the custom SpiritRover board…  but that’ll be a later post.

 

 

“The first step in solving any problem…”

… is recognizing there is one.”  The point of Will McAvoy’s rant from the opening of the HBO series The Newsroom (Season 1, Episode 1, “We Just Decided To”) is that without being informed, B.S. will run rampant and be treated as truth.  It’s worth watching, even though it’s a constructed moment from Aaron Sorkin:

A view from the tail-end of Gen-X

“Worst. Generation. Ever.” is a bit harsh… but that statement, inflammatory though it may be, should grab our attention, because it comes from a place of truth.  I am at the tail end of Generation X, born in the mid-1970s; WWII veterans were two generations above me — my grandparents’ generation.  Their parents watched the influence of radio, automobiles, and airplanes, while they and their offspring subsequently witnessed television, spaceflight, and products based on solid-state electronics (transistors).  Mine is the last generation to really remember what the world was like pre-internet, and even pre-computers-everywhere.  Typewriters were still ubiquitous, libraries had card catalogs instead of database software, mimeograph (“ditto”) machines were still used in schools before photocopiers/scanners replaced them, etc.  To do research or learn, it took planning, effort, and significant time.  Today, most information is at our fingertips every moment of every day thanks to wireless technology, computer miniaturization, and human/computer interface evolution.  McAvoy’s comments in the clip above are a statement about how we are using that technology, and how it exposes the collective tastes and behaviors of humanity — both bad and good.

To someone who grew up having to make a concerted effort to obtain and consume information in order to learn, the attitude was to hold onto it — to permanently internalize nuggets of truth from what one consumed, because re-learning it took real infrastructure and commitment.  For the Generation-Y/Millennial crowd, that mindset is replaced by one in which information is always available instantaneously, so the goal is not to internalize it but to remember how to obtain it.  These same people also complain when they have no WiFi or cell signal — they have become far more deeply dependent upon information infrastructure in order to function, even in mundane tasks, and that makes them more fragile to flaws or problems in that infrastructure.

In today’s world, ever more noise is added to the signal of quality information because there are fewer (if any) gate-keepers between anyone’s ideas and mass distribution, and it’s becoming harder to discern the signal from the noise each day.  I argue it even significantly contributed to the outcome of the 2016 presidential election.  Therefore, I am advocating that we emphasize the fundamentals of information consumption before it’s too late — without editors between the content creators and the consumers, we must take it upon ourselves to filter out the noise by being literate consumers.  This should be part of the literacy training every student receives in school — public or private.

A plea for literacy

Literacy is vital to a country’s ability to consume information, but without critical thinking and fact/resource checking, it can also be used against a populace through propaganda, or by throwing garbage “noise” information out there to cast doubt on things that are based on facts and are repeatable via science.

The USA is 7th in literacy rate of the 206 states in the world, according to the Washington Post in 2016.  This has improved from 11th, which is a good trend, but it may just reflect how close our neighbors are in the ranking: small changes in the calculations can cause swings of a few places.

The OECD ranking places the USA 21st in reading.  Why the difference?  Well, the WaPo article factors in accessibility to literature and library content, and other social factors.  Scoring well on tests is one thing, but a nation’s “literary health” must be assessed on more than just test scores, because, in extremis, knowing how to read but having only limited or biased information to consume does not make one fully “literate.”  We need to take the time to read and take advantage of the information resources around us: not just the internet, where it’s “easier” to get information but also more likely to be self-limited, but published books and public libraries from reputable editors and publishers.  Those editors and publishers provide the service of ensuring a certain level of credibility and quality — you should trust information from a credible publisher before a random blog on the internet, or worse, a social media post or email.

The internet removes the “gate-keepers” from the information dissemination process, and therefore raises the “noise” in the signal-to-noise ratio of consuming information.  Having a digital way to get your content accessible is advantageous to the “little guy” who doesn’t have the money to pursue publishing and distribution by traditional pre-internet means.  There’s great stuff available, but “buyer beware.”  Blindly trusting what you find online is risky, and it puts the responsibility of determining quality and correctness completely on the consumer of the information, with little on the provider (which is where publishers/editors add value).

Think of it this way:  in pre-internet days, self-published was kind of a warning to the reader and raised the following question: why didn’t a publishing house invest in this content, if it’s truly quality material?  There are lots of stereotypical “uncles with half-baked ideas” that would self-publish pamphlets and comb-bound books out of their basements/garages, but they would have a difficult time distributing beyond their local spheres of influence.  The internet turns this completely on its head (this blog included!).  What I write here on WordPress is accessible to the world, but the chances of it being seen aren’t based solely on its content, but its popularity (number of clicks registered in a database somewhere, which can be artificially manipulated by web bots) or how much money I’m willing to pay to promote it (similar to what publishers do).  Being popular or promoted is not at all correlated with truth.  In fact, popularity and promotion are the most effective ways to disseminate false information… yet those are the very techniques used by Google  and other search engines to produce results when we search for things.  The research community uses something similar — the number of times a published work is referenced within the literature community for a specific subject matter.  It is also weighted by the credibility of the author and the publisher who are both known quantities within the community.  In my opinion, this is what is missing from algorithms like PageRank.

What information is worth internalizing?

With the internet at our fingertips, why bother reading, or learning at all for that matter?  Can’t we just rely on Google or other search engines to look things up for us?  For facts/figures/trivia, yes.  For concepts and understanding that you can apply at a moment’s notice without needing to “consult the oracle” all the time?  That’s where reading and doing come in.  What you consume and act upon is what you internalize.  And once it’s internalized, it becomes much more efficient to act upon than being reliant on the internet for everything.

How do we do this?  How can we tell the worthwhile from the trivial or transitory?  The answer is to turn to how professional communities capture and distribute knowledge:  vetting and referencing.  These concepts are not difficult — however they are vital to ensuring that what we consume is not incorrect or existing only within a bubble of like-minded believers.

Ask yourself this: “Do I trust this source?  If so, why?  Can I defend the basis of my trust in it?”  You can build trust in a source based on its track record:  How many times has the source gotten things wrong?  Are there conflicting stories on the same site for no apparent reason?

 

What McAvoy/Sorkin’s “worst generation” is getting wrong, and how to fix it

Having the internet at one’s disposal, relying on search engines too much, and not internalizing core information relevant to decision making makes one run the risk of what a colleague of mine calls “fibrillation”: the inability to make progress due to constant reliance on changing information.  Knowing what is true (and being able to back it up with legitimate published facts) is critical to self-reliance and helps one focus on goals.  We can’t know everything, and we must rely on others to survive and lead a full life — however, we must know what is core to OUR lives, and internalize that information so we can disseminate and use it at a moment’s notice, or even add to that body of knowledge with something new.

Search engine results change based on the profiling of the individual doing the search.  Perform the following experiment: search for the same exact search string on two different people’s computers using Google — you’ll get a different list and a different ranking.  GOOGLE MAKES ASSUMPTIONS ABOUT THE END USER’S PREFERENCES AND INTENT BEFORE THE INFORMATION IS EVEN PRESENTED TO THEM.  For an eye-opening treatise on this subject, see Eli Pariser’s book “The Filter Bubble.”  So if we have a whole generation dependent upon technology for even basic knowledge, but that knowledge is being disseminated in different ways “tailored” to the end user, how can we expect consistency in the interpretation of the importance and relevance of the information presented?  This is the fallacy of search engines, and millions of end users don’t even know they’re being profiled and fed information by an algorithm making assumptions about who they are and what their intent is.  We as users of these systems need to take a stand and demand solid, repeatable, credible results of QUALITY, not popularity.  This is why I no longer search with Google or Yahoo, and instead use DuckDuckGo.  You can too.  You’ll be ok.

So, my plea to reverse this trend is for people to consider doing the following:

  1. Always ask “who wrote what I am reading?” and “Is this a credible source?”
  2. When looking up information, use search engines that don’t track the end user and use that information to filter/rank what is shown, like DuckDuckGo.
  3. Use ad blockers and “private” modes of your web browser.  Don’t leave any information for content providers to use to make assumptions about you or your intentions.  It’s OK to pay for credible sources like true journalistic endeavors, who need the money to pay good journalists to produce valuable content.  Think of it like subscribing to a newspaper — you’re paying for a service of vetted information that you can trust.
  4. Think.  Be critical of what you read.  If it seems too fantastic or weird, it probably was engineered that way to get more clicks/links/eyeballs to drive up popularity, just to sell ads and make money online.  Sites like BuzzFeed built their entire business model on this tactic (again, see Pariser’s book mentioned above).  Don’t be more grist for the ad-dollar mill (you don’t see a dime of it, but you can have your time wasted or, worse, be fed false information you mistake for truth!) — instead, be an informed consumer of information.
  5. Use libraries and published works more than you use the internet for important information.  You’re less likely to be led astray by the “noise” and you’ll keep these valuable institutions from “dying off” due to lack of use.
  6. Always be willing to present links/references to back up your facts when writing content.  If you can’t, you’re not part of the solution, but perpetuating the “noise” problem.
  7. Be ready to call other folks on their falsehoods — ask for references.  If they can provide them, consider their credibility along with the person who’s referencing them.
  8. Social media is never a credible sole source — it is a glorified rumor mill where anyone can say anything, and it should be treated as such.

 

#BurstYourBubble

If there is nothing else #Election2016 has made apparent to me, it’s that “social media” has caused us to be more isolated than ever, inside our own “bubbles” of like-minded interests.  Every social media site is based on a similar concept: following and interacting with accounts of interest, driven by the individual’s choices of what to follow.  Facebook has “friends,” Twitter has the paradigm of “following,” etc.  It’s too easy to be cynical and call it pure narcissism (though there’s plenty of that to go around), but there is definitely an inherent decision-making bias in us all to choose information sources that keep us comfortable rather than risk a challenge to our current world view.

The larger societal danger of letting individuals choose what information they consume is that they tend not to seek conflicting viewpoints outside of their chosen bubbles.  Republicans and industrialists won’t follow Democrats and socialists, and vice versa.  This is the “path of least resistance” that avoids the risk of challenging core beliefs, and it leads to us all viewing the world from inside our own self-constructed echo chambers.  Not all viewpoints are equally valid, but combined with critical thinking, expanding one’s world view helps one empathize with others with whom interaction would otherwise inevitably lead to conflict.

Echo-chamber-like bubbles do not promote this activity; in fact, the longer one stays inside a bubble and the more members it gains, the more a conflicting viewpoint seems like a minority position, even though it may be widely held, and it therefore runs the risk of being dismissed because of a skewed view.  That makes it more difficult to accept things that aren’t in the bubbles we’ve constructed for ourselves.  We can think of it as the world-view equivalent of economic bubbles, like the housing bubble that affected the world economy in 2007/2008 and led to the banking crisis.  Each of us runs the risk of the same thing happening to our core beliefs and understanding of how the world works — they may be completely skewed and detached from the real world outside our bubble… which, for most of us, is much larger than the bubbles we create for ourselves.

Empathy is important — but we can only achieve true empathy if we reach out and try to envision ourselves with the same influences and experiences that others have had… which means that we may have to challenge our daily way of thinking.  This takes effort, but the reward is being able to relate to others and potentially reach a compromise much faster than approaching a situation without empathy.

Therefore, I propose that regularly we all purposely burst our bubbles of social media that we’ve constructed for ourselves in today’s world by reaching out and trying to understand where a seemingly opposing view point is coming from.  Start doing it once a week, then twice a week, until you’re doing it at least once a day.  At that point, it will start becoming natural to think “why does this person feel this way” — even when that person may have a completely antithetical view to your own core beliefs.  After all, that other person has probably achieved their world view in a rational way from their perspective — but realize  that some people’s perspectives are skewed by environment and mental state:  these are factors that cannot be ignored.

While bursting a bubble, we also need to stay fact-based… do research on the claims and beliefs held by others, but be sure that you’re not fact-checking from inside your own bubble!  Research is important to ferret out the false news and “facts” that have been shown to run rampant in social media, skewing beliefs in the run-up to the 2016 presidential election in the United States, as well as the “Brexit” vote in the United Kingdom.  All of this takes effort — which is why, without consciously doing it, we will most likely stay inside our bubbles and be shocked when the world behaves differently than we, from within our bubbles, think it should.

So #BurstYourBubble — It’s healthy, and can only help you relate to others.   Personally, I will be tweeting about things that alter my world view when I read about them, and using the aforementioned hash tag.  I encourage you to do the same, and help us all break out of the isolationist information/conceptual bubbles that are so easy to cocoon ourselves within.

Why OpenGL + X11 on the Raspberry Pi is such a big deal

In case you missed it, Raspberry Pi and the purveyors of the Raspbian Linux distribution released a new version of Raspbian this past week that enables OpenGL hardware acceleration while the X11 GUI is running.  In addition to Mathematica 10.3 and a Sonic Pi update, they fixed a bunch of minor GUI issues, but the simultaneous OpenGL/X11 support means that 3D applications for creative projects now have the Raspberry Pi as a possible platform.  Software like 3D Lego CAD programs based on LDraw, and 3D creation software like FreeCAD and Blender for designing parts and other objects for production on 3D printers and other CNC machines, can now run on the cheapest computer one can build, bringing the design phase of maker-space-type projects to an even lower cost point.  Blender, in particular, is used extensively in the 3D animation industry, and was the primary tool behind Elephants Dream, a 3D animated film created entirely with open-source tools.

Here’s a screenshot of Blender running on my Raspberry Pi after the Raspbian upgrade (I don’t know how to use it yet… but one can learn!):

[Screenshot: Blender running on the Raspberry Pi]

To install Blender:

sudo apt-get install blender

This is truly the golden age of computer accessibility… provided one gets comfortable with Linux, and in my opinion everyone should learn the POSIX way of computing.  Apple products have “Darwin” underneath MacOS X and iOS: an open-source BSD derivative upon which they layer the Finder and other proprietary interface software.  When you get command-line access on any of those machines, it’s pretty much like Linux under the hood.  The only outlier in today’s world?  Windows, whose terminals all emulate the antiquated DOS prompts of old.  That means Raspberry Pis fit in nicely with other Linux development machines and MacOS X — development knowledge you learn on one easily transfers to the others.

Here’s a shot of FreeCAD (simple part, I know… just for illustration purposes) running on my Raspberry Pi after the Raspbian upgrade:

[Screenshot: FreeCAD running on the Raspberry Pi]

To install FreeCAD:

sudo apt-get install freecad

More examples of what FreeCAD can do can be found on their website.

Maker Spaces, Hacker Spaces, FabLabs, and other derivatives of the Center for Bits and Atoms and the Media Lab at MIT all have open source software and hardware very firmly entrenched in their culture.  Raspberry Pi and Raspbian fit very well within this, and would further blur the lines between development machines and embedded linux computing in projects produced by the movement.  Creators can use their Raspberry Pi machines to design and create not only code, as has been the thrust of the platform to date, but also the enclosures and machinery that the project may need as well all from the same, inexpensive platform.

Here’s LeoCAD, an open-source Lego(TM) CAD program based on LDraw, running on my Raspberry Pi:

[Screenshot: LeoCAD running on the Raspberry Pi]

To install LeoCAD:

sudo apt-get install ldraw-parts leocad

If you haven’t looked into the Make: world or any of the other links above, it is a pretty exciting world that is waiting for you with open arms.  The Raspberry Pi is only getting better and more useful with time as Raspbian improves.  If you’re even remotely curious, the cost of entry is so low (as low as $5 with the Raspberry Pi Zero!) that you’d be foolish to not check it out.  You never know where your personal rabbit hole will lead, and that’s the beauty of a creative community!

Stop the interruptions!

I recently had an Apple Watch come into my life, and it really drove home how many interruptions I get in one day thanks to the apps I use.  Lots of articles discuss the detriments of constantly being poked/prodded/beeped-at/flashed-new-messages, etc., on productivity in our daily lives.

App developers ship with most of their notifications enabled by default, because the center of their world is providing a service for you via the app.  However, not every app needs to bug you 24/7.  Apple has correctly provided the “Notification Center” in iOS, WatchOS, and MacOS X, which is your saving grace to filter out the madness.  Ideally, the only interruptions your software should give you are the ones YOU allow.

An important habit to adopt is to turn off all notifications when you get a new app (or an upgrade w/ new notification services), then add them if you need them.  The definition of “Need” will be different for different people based on their type of job and family needs, but I can tell you the questions I use for rationale:

  • Is this service “life critical,” meaning that if I miss something it will have immediate negative impacts on my personal or work life?  Example: FaceTime — if someone is trying to FaceTime me, I treat it as an alternative to the telephone.  The only time I turn off the ringer on my telephone is when I have an answering machine/service to catch the missed calls (FaceTime will record who tried to contact you, but not hold messages).
  • Will this service be annoying?  Example: if I got a ping for every email message, I would be swamped with pings potentially every minute of the work day.  This is ridiculous, and it is why I adopted the policy of checking my inbox MANUALLY every hour — no automatic updates.  Mac OS X Mail has a “VIP” feature that creates a different notification for emails from people on a special list.  This has proven useful, but for the most part, checking hourly at most has been perfectly acceptable to the folks I work with.
  • Does the service represent anything irreversibly time-critical?  Example: eBay — if there’s an item you really want and you want to know when you’ve been out-bid (especially near the auction end), you’ll want that interruption.  The key here is that it MAY show up at an inconvenient time… that’s up to you to manage.

… you get the idea.

If I had EVERYTHING turned on, I’d be swamped by: email, messages, FaceTime requests, news blurbs, financial messages from my bank, shipping notifications from packages, weather changes, status reports on physical activity… the list goes on.  Really, the only things I need reminding of are the items on my reminders lists (one-time items I put there for productivity and anti-procrastination reasons), calendar items so I can be where I’ve promised to be on time, and real-time communication requests when I am available to respond to them.  All the rest are “fluff” that just clutters my focus and cognitive space.

So, consider how many things beep/buzz/tap/vibrate/pop-up or do something else to vie for your attention, and see if you can cull out and minimize those things so you can regain control of focus in your life.  I did, and it made a WORLD of difference.