Is Distraction A Disease?

After my last post, I searched on distraction, and came across the article In Defense of Distraction. The author, Sam Anderson, discusses whether we’re faced with some modern plague that’s eroding our attention and ruining our lives.

Initially, the answer seems to be “Yes”, even according to experts. But he spends the rest of the article exploring the idea, and eventually disagrees with this answer.

Competition For Attention

More people are alive today than have ever before been alive at once. And many people have professions that require, generate, or depend upon knowledge. As we create new knowledge, the abundance of information, along with the ease of sharing and access, creates a dilemma.

“What information consumes is rather obvious: It consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

More information competes for the finite resource of attention. The more information we digest, the less attention left for other tasks. You can try to divide your attention further, by multi-tasking, but we’re not designed to multi-task. Our brain’s architecture has bottlenecks which prevent parallel processing.

One of the people he interviews says:

“… even ten years ago. It was a lot calmer. There was a lot of opportunity for getting steady work done.”

But this viewpoint is based on a social and cultural environment from the past. And as our society and culture change, so does the way our brain operates, thanks to neuroplasticity. The author argues that we are currently adapting, or are at least capable of adapting, to our new environment.

If people find the lack of attention so dangerous, why don’t they just opt out? Disconnect from the Internet, turn the phone to airplane mode, and do what they might otherwise be too distracted to do? Saying there’s no longer an opportunity to do steady work implies the distractions are impossible to avoid. But the more likely answer is that we find the distraction impossible to resist.

Addicted to Distraction?

In my previous post, I mentioned the variable reward nature of reading articles online. I’ve heard this idea before, particularly related to checking email. This is also mentioned by Sam:

As B. F. Skinner’s army of lever-pressing rats and pigeons taught us, the most irresistible reward schedule is not, counterintuitively, the one in which we’re rewarded constantly but something called “variable ratio schedule,” in which the rewards arrive at random. And that randomness is practically the Internet’s defining feature: It dispenses its never-ending little shots of positivity—a life-changing e-mail here, a funny YouTube video there—in gloriously unpredictable cycles.

Checking for new, interesting articles can be addictive. There’s the unpredictable chance I’ll find one that’s life-changing. And if I don’t check often, I potentially miss out on that reward. Spread this across multiple online services, and one can spend the entire day toggling between browser tabs in a quest for more dopamine releases.
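The pull of that schedule is easy to simulate. Here's a rough sketch (the function name, probability, and seed are my own illustration, not from the article): each “check” of a feed pays off with some small, fixed probability, so the rewarding checks land at irregular, unpredictable intervals.

```python
import random

def variable_ratio_checks(p_reward=0.1, n_checks=50, seed=42):
    """Simulate checking a feed under a variable-ratio schedule.

    Each check independently pays off with probability p_reward,
    so the gaps between rewards are irregular and unpredictable.
    """
    rng = random.Random(seed)
    return [rng.random() < p_reward for _ in range(n_checks)]

hits = variable_ratio_checks()
# Indices of the rewarding checks; the uneven spacing between them
# is what keeps us coming back for "just one more" look.
reward_times = [i for i, hit in enumerate(hits) if hit]
```

Averaged over many checks the payoff rate is constant, yet any individual check might be the jackpot, which is exactly Skinner's point.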

So, people don’t, won’t, or can’t opt out from the distractions, because they’re addictive and our brains crave them. I can see this as an explanation for why attention is decreasing these days, but it seems a poor excuse for why it continues to be an issue.

While addictions are real, we aren’t powerless against them. We can overcome addictions, especially when we understand them and have a real motivation to eliminate them.

Jackhammers

An analogy used in the article is “jackhammers”, or the things that take away your attention.

For Gallagher, everything comes down to that one big choice: investing your attention wisely or not. “The jackhammers are everywhere—iPhones, e-mail, cancer—and Western culture’s attentional crisis is mainly a widespread failure to ignore them.”

Instead of flexing our ability of executive control, or attentional self-control, we let the shiny objects distract us.

“You can’t be happy all the time,” Gallagher tells me, “but you can pretty much focus all the time. That’s about as good as it gets.”

Except, this sounds like addiction to productivity or focus. Even meditation seems co-opted to increase productivity and focus attention. What would Buddha think of that?

Then, the article delves into the topics of neuroenhancers and lifehacks, the embodiments of this addiction to increasing productivity.

Neuroenhancers

“Neuroenhancers spring from the same source as the problem they’re designed to correct: our lust for achievement in defiance of natural constraints.”

To legally use neuroenhancers, one needs a prescription, but this might change as public sentiment for the enhancers changes. People use supplements of many kinds, illicit or not, to push past their barriers. Aren’t they akin to protein shakes and caffeine-laden coffee?

We try to push past natural human limits by using these neuroenhancers. But what new limits will we find beyond our current horizon? And will we search for new drugs to push past those as well? Rinse and repeat.

Lifehacking

Lifehacking is the self-help phenomenon of using tips, tricks, or hacks to get yourself to do more things and keep from procrastinating in life.

What drives people to search for lifehacks? Are they so cripplingly unproductive they use these to turn their life around? Or are they already productive but now in search of a “better high”? Neuroenhancers are one type of lifehack.

“Where you allow your attention to go ultimately says more about you as a human being than anything that you put in your mission statement,” he continues.

This is interesting. Like the idea “it’s what you do that defines you”. Or “what you think, you become”. Your thoughts and emotions and preoccupations shape, in a large way, the person you become. Surround yourself with people who are like who you want to become.

Seems to make sense. If you’re mindful of where you put your attention, you’ll have power to control what kind of person you are. You can shape who you become, by choosing what you focus on. If you drift through life thoughtlessly and aimlessly, it’d be no surprise to end up as someone you’re not happy being.

Addicted to Productivity?

The variable ratio schedule of distraction can lead to dependence and addiction. But does productivity also follow a variable ratio schedule?

Like a person frequently refreshing their email inbox, to see if there’s something new, can a person repeatedly try to focus and be productive? When they fail to focus, that’s like not having a new email; they’re encouraged to try again shortly. But when they are productive, that’s like seeing a new email; it delivers that dopamine release, and reinforces their behavior.

Finding lifehacks related to productivity fills both these roles. The distraction is a little reward jolt. And it feels more productive, because you’re reading about being productive. And then you can apply the tip, and that might make you more productive. Or at least feel more productive, which is likely all that matters. If you’re then more productive, you get another little reward jolt.

In our quest for productivity, we’ll work and focus. And we’ll feel good about that. But will we soon find ourselves highly-productive, yet still unhappy? Highly-productive will be our new normal, our new baseline, and we’ll be unhappy with that in short order. This will drive us to seek new lifehacks, new neuroenhancers, in order to push past our new normal productivity levels and reach a higher ledge – to be ultra-highly-productive.

Necessary Distraction

Sam also suggests that distraction is necessary for focusing later.

This sort of free-associative wandering is essential to the creative process; one moment of judicious unmindfulness can inspire thousands of hours of mindfulness.

If I never followed any of my distractions or other thoughts, if I never allowed myself to explore tangential ideas, then several of my recent posts would never exist. And creativity is connecting existing ideas in novel ways. If we focus on following what we know, just to be productive, we preclude seeing things from another angle.

“Focus is a paradox—it has distraction built into it. The two are symbiotic…”

Fractal Thoughts

Our thoughts are fractal. Each sentence is composed of many words, and each word has many related thoughts. Each word within that related thought has additional, related thoughts. So you can explore one topic within a paragraph and find an entire world of ideas to entertain and delve into.

The hyperlinking nature of the web is analogous to the linking our mind does with memories. I can think of being a child, then swimming at the city pool, then the belly flop which led to a lifeguard rescuing me, and how I later became a lifeguard, being tan for the summer, but quickly losing it in the fall. Each of these a narrative arc boiled down to a single phrase.

Benefits of Inattention

He suggests that ADHD may be beneficial. Maybe it’s an adaptation to our new world. I’ve heard this in another article too. Children are diagnosed with ADHD, because they don’t fit into the typical school system and its learning style. But these children can grow up to be successful adults, because they’re free to find work that fits their personality and learning style. We require all our children to learn in the same fashion, which spawns learning disorders, but adults have more freedom in how they approach life, so that those “disorders” can be advantages in their own right.

Both Can Be Advantageous

Will our culture become more fractal, and thus favor people whose thinking mirrors that nature? I expect we’ll find that shallow, wide focus will prove valuable just like deep, narrow focus can be. The focus used will depend on the task at hand. Attention isn’t one-size-fits-all; nothing in life is. The real power lies in knowing which to use for greatest effect.

The Purpose of Distraction

The smartphone: a gadget with horsepower and potential. It’s useful as an immediately-accessible notebook, a navigation aide, or instantaneous-communication enabler. Yet, I typically use its endless-distraction feature – I read articles, or skim social sites.

It’s easy to say I’m piddling away my time with meaningless distractions, especially when I flit between multiple things. But this phrase “meaningless distraction” has a weight and connotation. Is the distraction useless, meaningless, and time-wasting? Or does it serve some purpose?

Playing a game, reading an article, scrolling past photos from friends – they aren’t in and of themselves meaningless. But they aren’t something I imagine myself liking to, or needing to, do. They don’t help fulfill my goals, or make me feel productive. Yet, I do them anyway.

There are often times when I’m not sure what I want to do. I have many options, but none seem immediately appealing. This is when a distraction bridges that uncertainty gap. It’ll do until I finally make up my mind.

Playing around on the phone is an easy out. And there’s also the variable reward part of it. Occasionally, I’ll see an article that’s worthwhile and thought-provoking. If I gave that up completely, I’d miss those. In that light, it seems about balance.

Even without the variable reward, it’s nice to take a mental break. Ahh, the mental break… So is distraction a way to mentally check out for a short time? A way to recoup some mental energy, so, in a little while, I can continue on with the day?

I can think of other tasks I use to clear my mind. Like doing the dishes because it doesn’t require much thought. Or hopping in the shower because my mind is free to wander. Like lying in bed because I woke up early and don’t want to get up yet. Or watching an episode of a show on Netflix because it doesn’t require the mental focus that reading a chapter in a book would.

In some sense, if I can’t decide what to do, then it’s decided for me. I’ll opt for distraction. And that distraction must be the thing I really want to do, at that moment. Otherwise, why the hell am I doing it?

If I later do some “productive” task, what’s the harm? Perhaps the distraction helps me get a second wind. Clears my mind and helps decide what to do next.

It’s appealing to count the “productive” hours in a day, but we aren’t machines. One cannot ramp up a thousand RPMs and crank out more productive hours. Not on a continual basis, at least. Downtime and distraction can serve to recharge and refocus.

Further, I wonder if it’s better to entertain a distraction than to do some task because I feel compelled to. The distraction has the potential to give me energy to hop into something I want to do, whereas the guilt-inspired task will just drain me.

There’ll always be more I want to accomplish than I’ll be able to. But I can’t let that weigh heavily on me. So long as my life isn’t constant distraction, I won’t worry much about taking small breaks.

Our Coming Hive

I read another article tonight called Hive consciousness. It got me thinking, and, though it didn’t start out that way, my writings here seem a follow-on article to my post Undreamt Networks.

Storytelling

Storytelling allowed humanity to shift from a reactionary bag of meat into a foresightful one. Instead of only responding to sensory stimulus, we can decide whether we’ll respond.

And we use our pattern matching to know that these prints in the dirt, along with the broken branches, mean that prey has been through here.

We told ourselves stories when we thought of the hunt. That our prey was in search of water, since there’s a stream nearby, and we can catch them there now.

Because we got the dopamine reinforcement when those stories were correct, or dopamine withheld when those stories were wrong, we told ourselves stories more often. We got better at storytelling with that feedback loop.

Seeing the future is something we easily do. We’ve modified the world around us more than any other creature in history, thanks to our predictions.

This has huge ties to the ideas in the Ishmael books by Daniel Quinn. They changed my perspective and outlook a lot, so I enjoy the opportunity to integrate them with other material.

Consciousness

According to research mentioned in the article, when the hemispheres of our brain are split in two, we form two personalities. One in each hemisphere. When connected via the corpus callosum, as is normal, a single personality takes stage. The personality that we know and are familiar with. Running on dual cores. A whole greater than the pieces.

Or the pieces might fall apart. Anesthetizing one half of the brain is only temporary, but it’s enough to create a new personality which operates on the single hemisphere, the single core.

Our consciousness expands to fill all the available space, like gas in a container.

So if we connect many brains together, there should emerge a consciousness that’s more than the sum of the pieces. But the brains must be connected with low latency and high bandwidth. And with the extra grey matter, the consciousness should adapt. The individual subsumed by the larger personality powered by the larger resources.

Once we connect a hundred brains, a personality different from all of those hundred individual minds will exist. When one brain is disconnected from that larger group-brain, the personality shifts some. Changes. It adapts to the neurons it has available. Still an “individual”; just not one we’ve ever known.

The Hiccups

The article asks us to think of what safeguards we’ll need. One jumped out at me as I read the end of the article: we need to ensure that we, as we currently are, the individual person we are before we hook into the group, exist on our own for at least part of the day. In order to keep our personalities, to some degree, there must be time limits on how long you can be part of the hive.

Then again, what happens when the interface has a hiccup? Our brains experience sleep, seizures, blackouts, and hangovers. Our technology experiences obsolescence, hard restarts, lag, and out-of-memory exceptions.

Will we have schizophrenia, and bipolar disorder, and gambling addiction, and stage fright when we have more grey matter? I’m willing to bet there’s a whole range of disorders and phobias that only manifest when operating with more cores. In the same way that a beetle doesn’t have OCD, and a single-threaded program doesn’t need concurrency.

Our Personality

Is personality like a muscle? If you don’t exercise it, does it weaken? If we are attached to the group for years straight, and then disconnect, will our regular personality still exist? Will it be there waiting for us to return? As if we’d just parked a car at the airport and it’s ready to drive when we return?

Does connecting to a hive mind, and experiencing thoughts through the larger consciousness, change our brain and alter our singular personality? Perhaps just by connecting to it, we’re changing our regular self.

But this is weird. Sometimes you’ll exist. And other times you’ll disappear into the brainsoup of the collective brain. And even that hive mind doesn’t have its own, fixed personality. Adding or removing you from the pool changes it. The personality would likely be chaotic and shifting. Is that even a way for a consciousness to successfully exist?

Perhaps it’ll tear itself apart through flux. Or perhaps the consciousness will never come to know boredom, and will focus orders of magnitude better than we can. We become content and accustomed to our surroundings; a constantly shifting consciousness never would.

What Is Self?

Why do we even care about the notion of our self being who we truly are? Is it just romantic? It’s already fleeting, in that our self is gone when we die.

Additionally, the self is only the way it is thanks to the chance of being born, the experiences we have, and fortune of being restricted to two hemispheres of one brain.

As long as we have a consciousness, isn’t that perfectly fine? We can be part of the larger mind and still be just as much alive as when we’re solo. Perhaps even more alive in the group, by unlocking new potentials.

This solo self is all we’ve known though. And the uncertainty is frightening. It’s something we’ll confront though. Some people won’t ever consider it. Others would rather never go back.

Could We Go Back?

Maybe once you’re connected, there’s no way to know the other side. Maybe you can’t even remember there’s a “smaller” person waiting for your grey matter when it’s unplugged from all the others. Unless you’re forced to disconnect. Or you’re told about it.

And you can “know” that idea as fact even if you can’t “grasp” and “feel” and “understand” it. In the same way that I know other people exist and are their own beings as real and complete and alive as I am, but I can’t know, feel, or understand what that truly means. Empathy isn’t Knowing.

The Mutable Self

Or perhaps, everything is relative, and the personality is fragile, mutable, and malleable, like we’d never expect it to be.

After all, which of us has the same personality as we had at age 5, or 15, or 25? Or even a year ago? We’re a person who exists in a single body and single consciousness. But that definition of “single” only makes sense at a high-enough vantage point.

Our current body has none of the same cells it had when we were born. If you look closer and closer, the atomic self becomes the quantum foam of age, location, and experience. We’re no longer the same cells as at birth. We’re no longer the same personality as when a child.

How do we perceive that the person in the past is Us? Sharing memories of that child and sharing the same genes as the baby gives us the ability to say we’re still that person.

When we come to share knowledge, ideas, feelings, and memories with more grey matter at a lower latency and higher bandwidth, over a longer period of time, perhaps that will be what we consider our true self.

Going forward, we’ll fill out the galaxy of being and form asteroids, planets, suns, neutron stars, and black holes of consciousness. The genetics will fade to irrelevant.

A Drop in the Cosmos

Can we connect rat brains and use them to think human thoughts? Are there some algorithms or thoughts that we can only experience when we’ve got 20,000 brains’ worth of power under a single hood? What happens to those of us who won’t or can’t be part of that?

Will our species split? The nautilus has existed for hundreds of millions of years. It’s not the exact creature it was then, but it’s pretty damn close.

This may be what Arthur C. Clarke, in his Space Odyssey book series, and Christopher Nolan, in his Interstellar film, allude to. A kind of being we can’t comprehend.

The neuron is not itself aware, even though the whole it helps compose is aware, even of that individual neuron.

Perhaps there is room for humanity as we know it today, even if some become a different beast. Unless we keep the trend of total war, in which case only one will survive.

Will Self Driving Cars Sacrifice You?

Earlier today, I read an article entitled Self-Driving Cars and the Trolley Problem.

He mentions Asimov’s Laws of Robotics, which are designed to minimize the harm robots could do to humans or humanity, or allow to happen to them.

The Trolley Problem

The main concern he raises is a philosophical issue about self-driving cars – the Trolley Problem. How would an autonomous vehicle react in a lose-lose situation? If your car was at risk of colliding with either a car with 5 people in it, or a car with one person in it, which should your car “allow” to happen?

At first, this seems an academic exercise, until you realize that it’s quite feasible. Many human drivers have already grappled with these kinds of split-second decisions, though I’m sure it’s more an instinctual, knee-jerk reaction for people. Not always, though; people trained to steer out of a skid would fare better in some cases.

Who Should They Save?

There’s the utilitarian model of saving the greatest number of lives, or, alternatively, killing the fewest people. But what if the 5 people were criminals escaping from a bank robbery? And the one person in the car was a cancer scientist who’d just made a major breakthrough? Perhaps it’d be best, then, to wreck into the car of criminals, since the scientist has a higher value?
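A naive utilitarian rule is trivial to write down, which is part of what makes it seductive. Here’s a minimal sketch (the option labels and tuple format are hypothetical, purely for illustration):

```python
def utilitarian_choice(options):
    """Pick the collision option that puts the fewest people at risk.

    options: list of (label, people_at_risk) tuples.
    Note what's missing: any notion of *who* those people are.
    """
    return min(options, key=lambda opt: opt[1])[0]

# Five occupants in one car, one in the other: the rule picks the lone driver.
choice = utilitarian_choice([("hit_car_a", 5), ("hit_car_b", 1)])  # -> "hit_car_b"
```

The moment you try to make the rule smarter than a head count, weighting criminals against scientists, say, you need the very objective measure of human value that doesn’t exist.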

This line of thought is disturbing, in that it assumes we can assign any objective value to a human life. That we can compare two completely different individuals and know which one has more value. There’s no such thing as objective human value, and this reasoning feels like a step toward eugenics.

What About Liars?

This hasn’t even considered more nefarious scenarios, like sending out incorrect data to other vehicles, which causes a chain-reaction of deadly activity. What about, what I’ll call the “Lying Trolley Problem”? A group of colluding cars broadcast a fake signal for “collision with 5 people right ahead”, so trailing cars divert and kill pedestrians on sidewalks or families playing in their yard, even though there was no real danger on the road.

Gaming the Value System

I shudder at the thought of an algorithm defining human worth, because there are so many ways to abuse it.

If cars had some algorithm to assign a “human value weight” to a car, a sole passenger could send fake data saying that they have 15 high-value passengers aboard at all times. Other cars would never impact this car, because its “human worth quotient” is so high. I could see an industry of hackers to bump up the value of your vehicle. This could then lead to “grade inflation”, trending toward infinity.
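The core problem is that such a rule trusts self-reported data. A minimal sketch (the vehicle IDs and broadcast format are hypothetical) of how a spoofed occupant count redirects the harm:

```python
def min_harm_target(broadcasts):
    """Choose the vehicle reporting the fewest occupants as the impact target.

    broadcasts: {vehicle_id: reported_occupants}; the values are
    self-reported and unverified, which is the whole vulnerability.
    """
    return min(broadcasts, key=broadcasts.get)

honest = {"car_a": 5, "car_b": 1}
spoofed = {"car_a": 5, "car_b": 15}  # car_b falsely claims 15 passengers

min_harm_target(honest)   # -> "car_b": the lone driver takes the hit
min_harm_target(spoofed)  # -> "car_a": the lie redirects the harm
```

Nothing in the decision rule changed between the two calls; a single forged number inverted the outcome.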

Perhaps there would even be a legal industry for “Human Life Value Optimization”, akin to Search Engine Optimization today. People pay for knowledge on what they should do to enhance their life’s value, at least where the algorithm is concerned. They could also embellish or outright lie about data to improve their score. People who can’t afford these services will be more likely to die than those who have the cash to spend.

Would a car assign your vehicle a higher value if you were on a road with many billboards? Google or Apple get more ad impressions from your car, so they have more incentive for you to live as opposed to someone who’s on a road with no revenue potential.

There Is No Objective Value

Back to the human worth issue: social and moral values shift over time, as we can see with the various human rights movements throughout history. Any value we assign to a human life now will not be the same value we assign it later.

Additionally, any value given to a person is only based on their past and current life. It cannot take into account things they may do in the future. Perhaps one of those criminals will reform and prove faster-than-light travel feasible? We’ll never know right now, because it hasn’t happened yet. I’m not sure how valid this argument is, but it adds weight to the notion that objective human worth/value is relative and short-sighted.

Accidents Are Accidents

Will the first autonomous vehicles have some sort of Collision Choice Processor? They’d have to be well-informed with enormous amounts of data, in real-time, and have a good algorithm or learning capability to process this information. Learning isn’t flawless. Decisions are only as good as the information available, along with the way it’s processed. Decisions, information, and learning will get better in the future, but there’s still room for error.

Accidents are, by their very nature, not done on purpose. Computer-driven cars will still have accidents. There will be times when there was not enough available information to make a better decision. There will be “acts of god”, where a landslide carries vehicles away and there’s nothing the computer can do.

And Bugs Too

What about software bugs? It’s impossible that these self-driving cars are flawless. We humans made them and we are fallible. As a result, so are our creations. There’s a lot we don’t know, and we make mistakes. Who is responsible for injury or death when the crash was caused by a programming error, false-positive in learned knowledge, a faulty vision sensor, or a cosmic ray?

Would a better bet be to make the cars very defensive? With their array of sensors, current knowledge, past knowledge, and possible communication with other vehicles on the road, can we drastically reduce the number of crashes, particularly fatal ones? If the car couldn’t avoid collision, perhaps it could make the collision less severe? Instead of t-boning another car, it could impact the trunk instead.

Contradictory Choices

But, even here, there may be contradictory outcomes computed by the autonomous cars. They both try to act in a “beneficial” way that ends up being more disastrous to both cars and occupants. Again, that’s an accident. Your “best decisions” won’t always be globally optimal.

This is even trickier when you consider a collision between a self-driving car and a human-driven car. The software won’t always reliably forecast what the human will do. Perhaps the driver’s suffered a heart attack, and their hands spasm and send the car careening in various directions. Or they’re intoxicated and weaving across lanes. What does the car do here?

Conclusion

The issues raised in the original article are quite interesting, and they beget many more. I wonder how these issues are being handled at places like Google, and how we’ll decide to handle them in our system of laws.

Part of me fears for the uncertainty behind self-driving cars, but humans don’t have a great track record either. Perhaps autonomous cars will be the lesser of two evils?

The Streetlight

The streetlight stood ten feet tall, along a cobblestone path in a small, green park called Maltrova. The sun had just slipped under the horizon, taking the day with it.

Its base was bell-shaped, but had ornamental lion’s feet at the cardinal directions. The claws of each foot were extended and poised to sink into the earth.

The oxidized bronze of the streetlight was weather-worn and had trails about it from where the frequent rain ran down to the grass to form small puddles. But there were no puddles this evening. There hadn’t been rain for some weeks.

The base domed to a small bump which then gave way to the main stalk. This stem had a sinusoidal quality to the surface, which spoke to the waves crashing in the distance, just over and down the cliff face that terminated the park’s lawn.

Perched at the top of the post was a four-pane lampshade containing a giant bulb. And the bulb had just flashed on. The bulb was of an older style; it cast a warm and familiar glow, but was quite inefficient. It lasted at most a few months before its filament burnt out. The open air bottom of the shade was quite useful for frequent replacements.

Heat from the coiled coil quickly warmed the air in contact with the glass shell, and that air in turn rose to the top of the shade. There it would cool, and be displaced by newly warmed air just leaving the bulb. Convection ruled in this tiny system.

The open bottom of the lampshade also gave access to insects seeking light and warmth, but it had another benefit: it didn’t retain any of the carcasses that would otherwise quickly accumulate and mar the streetlight’s visual aesthetic. That aesthetic was the sole reason to maintain this demanding fixture in the first place.

The faint hum of its operation radiated toward the trees, but their leaves absorbed it, and it went no further. A breeze stirred and the swaying leaves speckled the ground with shadow and light.

A few airborne particulates clanked off the streetlight’s glass panes and metal stem to fall silently into the grass. Several more of these grains came down, sounding like metallic rain upon the pathway.

A flash of light in the sky suddenly outshone the lamp, whose shadow swept an arc across the lawn. The lamp’s dominance over illumination returned, and the breeze failed.

A few seconds later, a high pressure air wave met the panes in the lamp shade. The glass trembled, became a spiderweb, and then fractured into a million shards. For a fraction of a second, the ground had a terrific kaleidoscope of lightwork. But the bulb then shattered as well.

The park fell into darkness as small rocks tumbled down. Light did briefly return by way of larger, superheated stones which clattered about and dented the base of the streetlight. Sizzling boulders then collided with the lawn, throwing dirt and mulch and grass clippings into the air. They rebounded and traveled on haphazardly.

The remainder of the meteor then impacted the streetlight, and, with a screech, the metal frame gave way. The now-meteorite plowed through hedges as the lamp post lay contorted on the ground – no longer sinusoidal.

Sirens sounded in the distance as the night deepened.


The inspiration for this story struck last night while I attended my first Streetlight Manifesto concert at the Ogden Theater in Denver, Colorado.

Undreamt Networks

The Way It Is

A goal in software is to create small, reusable components. Sort of how simple, metal beams can be used to create buildings, bridges, or space stations. To take individual pieces and put them together in novel ways, to accomplish some new dream.

Electronics have grown to embrace this idea in a real way. Processors, memory, storage, and many other components are inter-connectable, and can be fashioned into anything from laptops, to satellites, to particle accelerators.

These small pieces stand alone, but are more useful when connected to other pieces. Notice the similarity to ideas? Our brains take tiny neurons which fire individually and build a sophisticated network of memories, feelings, and the amazing consciousness of which we know so little.

Who hasn’t heard that creativity is really just connecting the same dots in a different way? In the shower, you’re thinking regular thoughts, but you make a different connection between them, which results in your serendipitous revelation. You then hurry to finish showering, to make note of your idea before it fades away!

Lately, my goal with writing has shifted to building up a collection of small ideas. Ones that stand on their own, but are more useful when related to others. Turning these ideas into words, and seeing them in relation to their siblings is useful. The ideas are then clearer to me, and I can realize how they’re tied together.

The hope is to eventually build larger ideas and writings. But, in the same way that you don’t build a bridge from one piece of metal, these larger pieces are fashioned from connecting smaller ones.

Beyond

To connect metal, weld. To connect software, make a call. To connect web pages, link them. These networks of tangible or intangible entities enable us. We take one, well-designed thing and get more mileage from it when it’s connected to some other thing.

Our brain connects billions of nerve cells into a network of staggering capability. But this only works within a single organism. What about linking organisms together?

We’ve invented (or discovered; we may never know) speech, oral traditions, writing, books, music, radio, film, video games, and photography. These are all meant to communicate ideas from one person to another. They’re a means to connect the neuron that is me to the neuron that is you. And our culture and civilizations have exploded with the force multiplied through these connections.

But computers fail. People pass away. Books and photos burn. Memories and radio transmissions fade. Knowledge is lost and rediscovered. But that’s only the way it’s been, not the way it has to be.

To step beyond our current position, we must invent (or discover) how to better connect people; how to store our knowledge in small pieces we can link in elaborate and impressive ways.

What’s the next step beyond books, colleges, the Internet, and all other traditional forms of sharing knowledge?

How do we connect humans to one another, to the living world, to beyond? How do we join all these neurons and power the consciousness that arises from that network?

It lies in learning to remix what is now in new and not-yet-obvious ways.

The only way we’ve found to get anywhere is to leave something behind. (Thanks, Interstellar!) Let’s leave behind our fragile books and lone consciousness as we propel future generations on toward that new, distant shore.

How do we begin? Experiment.

What The Weekend Brings

It’s now Saturday, and I’m hit with two conflicting thoughts.

The first is:

It’s the weekend, so now I can work on my side projects!

This is appealing because I have all today and tomorrow in which to make progress toward my goals of software, writing, or drawing. I don’t have this much free time available during the week, so it makes sense to capitalize on it.

The second thought is:

It’s the weekend, so now I can watch a TV show, take a nap, hang out with friends, or read!

This is appealing because I’ve worked throughout the week, and now is the perfect time to do something away from the computer. It’s important to recharge to avoid burnout.

I say they’re conflicting thoughts because one involves working on projects, and the other means taking a break from projects.

What to Do?

But, when framed this way, these thoughts lead me to believe I’ll spend the entire day doing either one thing or the other.

Realistically, I should be able to do a bit of both. On a weekend day, where I’m setting the schedule, there is enough time to write for a while, watch some Netflix, and then also hang out with friends.

It’s one thing to realize there’s a chance for balance, but it’s another thing entirely to know and feel it.

It’s curious I think this way, and I’m not entirely sure why I do. I believe my father has this tendency, so maybe it rubbed off? That, or time during week nights is in limited supply, so it usually is one thing or the other. And since there are more week nights than weekend days, the one-or-the-other mentality is my normal one, which carries over to the weekend, even when it isn’t applicable.

Deciding

What can easily happen, though, is that I waffle over what I should do. Do I work on everything-wordpress to add image uploading, or a blog post like this, or the Fourth Mechanism (part of the Mechanism Collection), or another story that’s brewing in my mind, or my one minute timer application, or something else entirely?

It’s difficult to settle on one idea when there are so many competing for attention, and each are worthwhile.

So what, then? I’ll probably just piddle my time away on distractions, or flit between multiple things, without actually focusing very well on anything.

Break It Up

Perhaps this really means I’m not good at segmenting my time? I don’t regularly give myself 1 hour for this, 2 hours for that, 30 minutes for the other. But on the weekend, should I really even have that mindset? Especially if it’s leisure time?

If I really want to get multiple things done, I suppose some amount of managing time in the day is still necessary.

Just Pick Something Already

Today, I took the approach of doing what struck my fancy. I read while walking on the treadmill for an hour, and then I wrote (part of which turned into this post), and then I worked on some code.

There are other ways to decide what to do, but a reliable method is to just go with the flow and do what interests you in the moment.

Marketdown: Exploring DCI

I recently read a book titled Clean Ruby by Jim Gay. It wasn’t too long, but I found the content quite interesting.

In it, he describes a way to write cleaner Ruby code using the Data-Context-Interaction (DCI) pattern.

Thin Controller; Fat Model

In a typical Rails app, which subscribes to “Thin Controller; Fat Model”, each model has many responsibilities. Instead of having one enormous model file, we can group related methods or behavior, and extract them into modules/concerns. We then include them on the model.

The downside of this approach is that every instance of that model class has every one of these methods on it, all of the time. Regardless of which controller action was invoked, each instance has all these methods, most of which are likely not used during that single action.

Service Objects

To slim down our models, one option is to use service objects. This is an approach I like. You can break your code into smaller pieces based on responsibility and use patterns like decorators, presenters, observers, and others. Data-Context-Interaction seems like an extension of service objects.

Explaining DCI

The “service” in this case is really the Context. The context is the thing we’re trying to accomplish. If we’re purchasing a book, then the context is Purchasing.

A context may have one or more objects, the Data, which interact to accomplish some task. The data play “roles” in this context. To purchase a book, we can imagine two roles: Purchaser and Book.

The interaction between the data in this context is what glues things together. Here, that interaction could be complete_purchase.

Gems for DCI

Jim created the Surrounded and Surrounded-Rails gems to allow us to use DCI in our Ruby/Rails applications.

Note: I did have issues where surrounded-rails didn’t seem to do what it promised: automatically including Surrounded in the models.

My Demo

Since I’ve been interested in writing and Markdown lately, I decided to create a small Rails 4 app which would allow people to sign in and create a book, which other people could then purchase.

I called it Marketdown. You can view the live demo as well as the code.

You can test out the demo yourself:

  • Type in a username, and sign in
    • No passwords here, it’s really barebones
  • Create a book yourself
    • You can enter Markdown, which will then be rendered as HTML when people view your book
    • HTML should be escaped, to hopefully prevent some maliciousness
  • Purchase a book someone else has created
    • Don’t worry, there are no credit cards or anything.

Once you purchase a book, the site indicates that.

That’s about the extent of this demo.

Digging In

The most interesting part of the demo is how purchasing a book is handled. The context, as we mentioned above, is Purchasing.

In the site, we have users. And a user who wants to purchase a book plays the purchaser role. The book they’re purchasing, no surprise, plays the book role.

The benefit of roles is that a generic class like User can be used, but given a more meaningful name in this context, based on the role.

We “trigger” the interaction between these two roles with a method called #complete_purchase.

The trigger contains the business logic behind purchasing.

  • A user has to be logged in
  • An author can’t purchase their own book
  • A user can’t purchase the same book twice

The context is where this business logic lives. It doesn’t have to be stuck in the controller, or in the model. The controller just invokes the context.

Another huge advantage of DCI and the Surrounded gem is that we can add behavior to each role, only in this context. The methods are added to the role’s instance, scoped to this context. The instance starts playing the role, so it gains some additional behavior. When the instance is done playing the role, it loses the behavior.

This seems like a great way to limit behavior to specific instances, to help keep your models clean. You only add behavior to instances just when you need it. Your models retain their persistence, validation, and other ActiveRecord magic all the time, but they receive additional behavior based on the roles they play.

When a user is signing up, neither the User class nor the user instance need the #owns_book? method. They’re outside the Purchasing context, so they don’t have it.

In another context, Authoring, an author can #publish_book, because they need that behavior. But outside this context, the user doesn’t need that ability.

Discussion

After talking about this pattern with my coworker, Zac, I see a downside to DCI.

We’re used to creating classes, which give behavior to instances of that class. It’s standard OO. But it would be surprising for someone to look at the above demo, without understanding the concept behind DCI, and understand what’s going on. It violates the principle of least astonishment.

Perhaps it’s just that this pattern is new and unfamiliar. With a bit of communication and understanding what problems it solves, that could become clearer.

But it is another abstraction we’ve added to our design and mental model of the software. Perhaps the ideas are worthwhile, but is the value the pattern provides worth the cognitive overhead of learning, understanding, and applying it? Zac gave several links which had interesting discussion about abstraction.

Concluding

I’ve only just become aware of the DCI pattern, and Marketdown is my first foray into applying it. I’ve not had any production experience with it. It does pique my interest though.

I like the concept of breaking an application’s features into contexts, which isolate behavior to their roles, and allow them to interact to accomplish something larger.

I would like to see what the software’s design looks like when DCI is applied to real problems on a larger scale. I’ll keep an eye out for other projects that might use this. And I’ll see if I can apply it to a future side project of mine.

Writing this post alone has helped me better understand the concepts, and I hope it’s useful to someone else. Thanks for reading!

Using Before Blocks In RSpec

RSpec is a handy tool for writing tests in Ruby. Writing tests means constantly learning. It takes a long time to learn what to test, how to test, and how not to bite yourself in the ass later. And even then, you’ll learn more as you write your next set of tests.

The latest tests I’ve worked with are controller specs, to assert the behavior and output from Rails controller actions.

They have evolved to look something like:

Note: The code blocks below have some pseudocode. I want to convey my meaning here, rather than give 100% working code.

require 'spec_helper'

describe SomethingsController do
  describe 'GET index' do
    describe 'with an existing something' do
      let!(:something) { FactoryGirl.create :something }
      let(:action)     { get :index }

      # This is what we're interested in, for this post.
      before do
        action
      end

      it 'returns okay' do
        expect(actual_status).to eq expected_status
      end

      it 'has the right output' do
        expect(actual_output).to eq expected_output
      end
    end
  end
end

The topic of this post is the before block. It calls the action before each it block runs.

This helps reduce duplication across assertions, particularly when there are many. It also ensures we run the action for each assertion. It can be easy to overlook a missing action call.

But, there is a downside to this pattern. RSpec runs before hooks in definition order, outermost group first. This means that, if you include shared examples, which may define nested contexts with their own hooks, then your tests might not run as you expect.

Consider if we now want to authorize an API token given with the request. Since we want to do this across many controllers, we can include contexts and shared examples to ensure this same behavior in many areas.

require 'spec_helper'

describe SomethingsController do
  describe 'GET index' do
    describe 'with an existing something' do
      # NEW!
      include_context 'with API token'

      let!(:something) { FactoryGirl.create :something }
      let(:action)     { get :index }

      before do
        action
      end

      # NEW!
      include_examples 'authorize API token'

      it 'returns okay' do
        expect(actual_status).to eq expected_status
      end

      it 'has the right output' do
        expect(actual_output).to eq expected_output
      end
    end
  end
end

The shared example would look something like:

shared_examples 'authorize API token' do
  describe 'with no API token' do
    before do
      remove_header :api_token
    end

    it 'returns unauthorized status' do
      expect(actual_status).to eq unauthorized_status_code
    end
  end

  describe 'with invalid API token' do
    before do
      set_header :api_token, 'AnInvalidApiTokenHere'
    end

    it 'returns unauthorized status' do
      expect(actual_status).to eq unauthorized_status_code
    end
  end
end

Now, when we run the specs, we get a failure. The actual_status is 200 instead of the 401 we expect. Why is this?

In the shared examples, we modify the request’s headers, right? But the action had already run, thanks to that before block in the controller spec: the outer group’s hook fires before the nested groups’ hooks that modify the headers. So the action used the regular, valid headers.

We didn’t modify the action/request before it ran, like we wanted. So we didn’t get the status we expected.

But we can fix this issue. The solution is to yank the before { action } piece, and call the action in each it block. This way we can modify contexts as we need, and only run the action right before we check the assertion. This is really what we want.

The controller specs, updated as described above, would look something like:

require 'spec_helper'

describe SomethingsController do
  describe 'GET index' do
    describe 'with an existing something' do
      include_context 'with API token'

      let!(:something) { FactoryGirl.create :something }
      let(:action)     { get :index }

      # We removed the before block with the action...

      include_examples 'authorize API token'

      it 'returns okay' do
        #NEW!
        action
        expect(actual_status).to eq expected_status
      end

      it 'has the right output' do
        #NEW!
        action
        expect(actual_output).to eq expected_output
      end
    end
  end
end

And here are the shared examples:

shared_examples 'authorize API token' do
  describe 'with no API token' do
    before do
      remove_header :api_token
    end

    it 'returns unauthorized status' do
      # NEW!
      action
      expect(actual_status).to eq unauthorized_status_code
    end
  end

  describe 'with invalid API token' do
    before do
      set_header :api_token, 'AnInvalidApiTokenHere'
    end

    it 'returns unauthorized status' do
      # NEW!
      action
      expect(actual_status).to eq unauthorized_status_code
    end
  end
end

We must remember to manually call the action for each assertion, but we gain the benefit of greater flexibility. This is important for the tests we write now, and also those we’ll write in the future.

Oh yeah, since we kept things simple and reduced astonishment, our specs run like we expect them to. Now our tests are back to green!

Manually Update Postgres Timestamps In Rails

Let’s say I need to set a boolean field on 100,000 records at the same time. And I’m using Rails 4.x, just to clarify.

The simple way to do this is

unprocessed_records.each do |unprocessed_record|
  unprocessed_record.processed = true
  unprocessed_record.save!
end

But this results in 100,000 ActiveRecord objects instantiated and held in memory at the same time. Since all I’m doing is updating an attribute, this is not efficient in memory or time.

A more efficient way to update all these records at once is #update_all.

unprocessed_records.update_all(processed: true)

The benefit is that it doesn’t create an ActiveRecord instance per database record. It also skips validations and callbacks from the model, so the database update is faster.

One thing ActiveRecord does give us is the timestamps. Each time we #save a record, its updated_at is modified. Using #update_all, we lose that. In the case of a database migration, we might not want the timestamps updated, but, in this use case, we certainly do.

Additionally, #update_all inserts values into the database without passing them through ActiveRecord’s typecasting. So it’s important to use the correct value, since ActiveRecord won’t do anything magical for us.

So how can we use #update_all and still ensure the timestamps are kept current?

In searching, I came across this StackOverflow question from 2013. The accepted answer mentions that you can pass #update_all a value of updated_at equal to DateTime.now. But I wanted to be sure this solution would still work today.

In the Rails console, I found the value which would be put into the database.

> DateTime.now.to_s
=> "2015-04-27T18:20:28-06:00"

Next, I had to check whether this value could be used for our column type, since ActiveRecord won’t be typecasting it.

My database is Postgres, and for created_at and updated_at we use the column type timestamp [without time zone].

The output in the Rails console above clearly has the timezone information, so I needed to check whether that’s acceptable for Postgres or I’d have to munge the value.

Postgres’ documentation on date/time types has the following:

Note: The SQL standard requires that writing just timestamp be equivalent to timestamp without time zone, and PostgreSQL honors that behavior. timestamptz is accepted as an abbreviation for timestamp with time zone; this is a PostgreSQL extension.

This led me to think I could use the value of DateTime.now.to_s and be fine. I tested it out from the command line, and verified it worked. The timestamp had the correct value.


So, when you need to manually update timestamps in Postgres using Rails, use DateTime.now.to_s.

Keep this in mind when you need to update many records along with their timestamps. We don’t get all of Rails’ magic, but it will save the server a lot of time and memory.


Update: My co-worker Zac McCormick asked a good question. If we need to store the time as UTC, does Postgres convert non-UTC times to UTC time before it strips the timezone?

The answer appears to be no. This SO answer tells us timestamp [without time zone] ignores a time zone modifier if we add one. The time zone is assumed to be that of the time zone setting.

If your server’s timezone is not set to UTC, and you want to store UTC timestamps, you’ll have to convert the time yourself. This other SO question has a good answer.

> DateTime.now.to_s
=> "2015-04-27T19:07:55-06:00"
> DateTime.now.new_offset(Rational(0,24)).to_s
=> "2015-04-28T01:07:56+00:00"