New music from Lemaitre this week
And another I missed from March
This section of the blog will have daily updates with articles, podcasts, videos, and anything I find interesting. The content will be brief, just a quote and my reaction most of the time, but in higher quantity than my other posts.
I wanted an outlet for all the things I learn about in a day. Instapaper, Feedly, and Pocket Casts are great for following interesting creators, but it’s difficult to go back and see what I was reading, listening to, watching, or thinking about on any given day.
The format is based on Daring Fireball (and I’m sure many others), but links and quotes will be here in their own feed to separate my own work from my reactions to the work of others.
Cal Newport said “I support the social internet. I’m incredibly wary of social media.” I tend to agree, so this is my take. A news feed that I control. No algorithms or trending topics.
If you want to see the inner workings of the blog or follow what I follow, this is the place. If not, regularly scheduled thoughts will continue once a weekish.
Here’s the RSS feed: https://ryancropp.blog/category/news-feed/feed/
But that doesn’t mean everything from the last week was old information.
An important point Zuckerberg reiterated is that Facebook does not sell user data. Selling it would be a silly business move because Facebook’s value to advertisers is in the uniqueness of its data. It is in Facebook’s best interest to keep its trove of data secure, as that is what keeps advertisers coming back. There’s no other place advertisers can go to get the same level of targeting.
Instead of selling data, Facebook collects all the details from every person “in the community” and compiles the best advertising opportunity for a given ad. Facebook assures advertisers their ad placement will reach the intended audience with the greatest possibility of interaction. It is this assurance that gives Facebook its gazillion-dollar market cap.
The Cambridge Analytica case was different, but even there Facebook never sold data. Instead, Cambridge Analytica got raw Facebook user data from an app developer who used a survey app to harvest it. In 2014, it was within Facebook’s terms for a third-party app developer to use the Facebook developer platform to collect just about all the information you and all your friends had ever entered on the site.
This is why the current Facebook fiasco is not a data security breach, but a data privacy leak. Hackers did not break into Facebook systems to obtain user data; a developer (who could have been anyone) used Facebook-sanctioned tools to collect your information. Facebook has since locked down its platform to prevent such unrestricted access to user data, but that does not change the fact that massive amounts of user data left the platform seemingly without the consent of its users. And yes, it’s true that by signing up you agreed to the terms that allowed developers to leverage the wide-open API to gather profile information, but did you really know that was part of the agreement?
Did you check if your info was collected by Cambridge Analytica? Go ahead, I’ll wait ⌚😊
After you’ve read through your activity log and exported your data, take a minute and think about what stands out from the content (I think this tinfoil-hat scandal is all a ploy to get us to go on Facebook even more. Feel free to finish reading in the meantime; the export takes a while). Once you get to the details, you can see the majority of the information came from you, but there is a small subset which reveals the inner workings of the Facebook machine.
To put things in perspective, focus on your ad preferences and take a look at your ad demographics information. This is a window to the 98 categories from the Senate hearing. Advertiser demographics are the result of running all our interactions on Facebook through a proprietary algorithm. Of all the information in the data archive, this piece is novel. We didn’t explicitly tell Facebook this information; they determined it based on what we’ve done on the site.
This is why the Facebook hearing this week is only the tip of the iceberg. If we are concerned that Cambridge Analytica could sway an election with a slice of our data, what kind of power does Facebook have? Sure we didn’t entrust Cambridge Analytica with our data, but why does opting into a puppy video sharing service change our perception of possible psychological manipulation?
We need greater transparency on how our data is used. I can control and know what I upload, but what happens with the data “I own” once it’s handed over?
When I upload a photo to Facebook, what algorithms are tuned as a result? How does the content of the photo affect ads I see?
WhatsApp communication is encrypted, so it’s private between those in the conversation, but in what way does Facebook link my WhatsApp, Instagram, and Facebook accounts? I’ve logged into all three on the same device, so they must know it’s the same person (even though I signed up for all three as separate users).
And what about activity coming from the same IP address or GPS location? Does Facebook correlate data of those physically closest to me, outside of our connections on its services? What about when I’m on Facebook but signed out?
The consumer-facing fun part seems like a front for the stingy advertising business on the back end. What is the difference between the two? It’s telling that Zuckerberg doesn’t fully understand the difference (from questioning by Brian Schatz). From Facebook’s perspective, the “fun part” is the user feature set that drives advertising revenue. It’s the top of the funnel for all of Facebook’s algorithms and drives the company’s valuation.
For a platform that relies on its users to generate value, the company doesn’t provide much information to said users on how the internal cogs work. Perhaps it’s best to be blissfully unaware, or maybe it’s not a requirement, but when 2 billion people feel like the product and not the customer, it’s reasonable for them to want a little more information on how they’re being used.
Check permissions when using Facebook (or Google or any other service) to sign up for a new site. To keep the same convenience, sign up for a password manager like Dashlane or LastPass, which can generate and remember a new login for each site you visit. This adds a layer of security to your accounts and reduces the possibility of another Cambridge Analytica-style data leak.
Use a separate browser just for Facebook. Only log in to Facebook on that browser and do all your other web stuff in another. Or use extensions like Ghostery (which also tracks your trackers, so maybe just turn off the internet for the day…) or the Facebook Container for Firefox.
Video of Zuckerberg’s Senate hearing (transcript) and appearance before House committee (transcript)
Day 2 from MIT Technology Review
What was Facebook Thinking by James Allworth
The Facebook Current and The Facebook Brand from Stratechery
Facebook and Cambridge Analytica Explained from NYTimes
Facebook’s Real Mistake and Facebook Fatigue from Exponent Podcast
Mark Zuckerberg is Either Ignorant or Deliberately Misleading Congress from The Intercept
What is GDPR?
General Data Protection Regulation
Coachella streams 1, 2, and 3
In his recent posts, Cal Newport outlines why our attention will benefit from individuals owning their own domains. We may need tools to help us do it, but companies will assist us from behind the scenes, allowing us to build our own brands. People should be able to move their brand (and data) from one platform to another when improvements come along. This is the social internet, and it will power the economy of the future. Value online comes from those who create it. All we can do as technologists is empower others to make their art with greater efficiency.
Does Jaron Lanier follow blogs? Where does he get his news? How does he learn about Meltdown/Spectre?
Word of mouth was the original form of communication. Before there were books, people could only tell stories to share information. The collective hive mind of civilization would do their best to spread knowledge equally from one person to the next. Verification of stories could only be carried out collectively, as groups of people could ensure what they believed was true. One could add individual color to a story to show creativity, but that ultimately led to deviations from the original idea.
Fast forward to books. Once we mastered the skill of preserving information in physical objects, the amount of our collective knowledge exploded. We could remember things across generations, even without coming into contact with the person or people who first transcribed their ideas. We could pull from philosophers, physicists, mathematicians, composers, playwrights, and doctors to develop deeper ideas and advance our understanding.
What was the hive mind in the world of books? It still lived among the people reading these works, who pulled from their own experiences and created their own interpretations, remixing their learnings into new forms of intelligence.
Today is another progression. We go beyond having all the knowledge in the world documented and at our fingertips, to peering into the minds of everyone on the internet. Social media platforms like Twitter amplify ideas only for an instant, as the next thing comes along and yanks at our attention.
(This is no different than before when we spread stories across the world, or documented our understanding of nature.)
We go where the thought leaders go. And when rapid reactions and quick wit are incentivized, we miss out on the deep thinking required to keep progressing. As Lanier mentions, where are all the Woodwards and Bernsteins these days? Deep investigative journalism is becoming a thing of the past. Instead our big stories take the form of the aggregate: pulling in the voices of all perspectives involved, and taking down multiple people.
Continuing the thought, how can one create new ideas and seek blue oceans? Part of the success of the web (and any technology, for that matter) is the externalities spawning new industries out of the original innovation. Like cryptography: it takes a lot of work to come up with a solution, but once public, the idea is easily verified. It’s the “why didn’t I think of that” moment you get when watching Shark Tank.
So how can we do it? Why is music from the 90s and 00s so similar in sound? Are we bound to digitally rehash all of history? To find out, let’s think about some of the new ideas stemming from web 2.0.
Well, one more digression. To do so, let’s start with some digital rehashes: Airbnb -> hotels. Lyft -> taxis. Wikipedia -> encyclopedias. eBay -> thrift stores. Amazon -> bookstores, grocery stores, restaurants, brick and mortar. These are all hugely successful companies that replaced what existed before. I think what’s missing from Lanier’s manifesto is the added value web 2.0 tech brings to previous implementations. However, he does highlight what’s lost in the transformation. (There is more to talk about here, but I’m getting off track.)
What is new thought? Is a review of a book just adding to the noise? How can we ever learn if we do not discuss our thoughts and opinions with others? There is value in rehashing work if the idea can stand for something greater. A new version of Unix? OK, sure. But openly available for all to improve and understand? That is novel and moves society forward. Lanier is concerned with the side effects of open culture, and I agree with him on the aspect of sustainability (via employment: how do you make a living working on open source?), but how do we build cathedrals if we don’t have the tools?
Part of Lanier’s concern stems from the abstraction of humankind. Kevin Kelly’s one book theory, for example. And it is important to maintain human individuality and creativity. So how do we keep from abstracting the person behind the creation as we move to an aggregated world? People no longer know which studio produces a movie or TV show, unless it’s from Netflix. Netflix advertises their creations, and everyone else’s are abstracted to a title, image, and caption.
There is a recent episode of the Ezra Klein Show with Lanier. Instigated by the release of Lanier’s new book, the two discuss all sorts of things including VR, music, Facebook, blogs, and podcasts. The most intriguing thread was on the topic of social media’s influence in collapsing context of the things people create. They didn’t know who coined the term, but it seems to have been either danah boyd or Michael Wesch (see below), although it might as well have been Lanier.
The basic idea is this (as nicely described by Joel on Software):
Here’s what happened with the 140 characters. You would start out having some kind of complicated thought. “Ya know, dogs are great and all? I love dogs! But sometimes they can be a little bit too friendly. They can get excited and jump on little kids and scare the bejesus out of them. They wag their tails so hard they knock things over. (PS not Huskies! Huskies are the cats of the dog world!)”
Ok, so now you try to post that on Twitter. And you edit and edit and you finally get it down to something that fits: “Dogs can be too friendly!”
All the nuance is lost. And this is where things go wrong. “@spolsky what about huskies? #dontforgethuskies”
Ten minutes later, “Boycott @stackoverflow. @spolsky proves again that tech bros hate huskies. #shame”
By the time you get off the plane in Africa you’re on the international pariah list and your @replies are full of people accusing you of throwing puppies out of moving cars for profit.
The context for Joel’s thought is his decision to give up Facebook and Twitter for 2018. (Isn’t it odd how things come in threes? Reading Lanier, Context Collapse, Joel on Facebook & Twitter). His reasons for doing so are exactly what Klein and Lanier discuss in the podcast. You just lose the human connection when everything we say and do is mashed up, chomped into a sound bite, and thrown around far outside the initial context for the idea.
And I realize I’m constantly doing that now. I haven’t quite figured out how to include quotes and references to others when developing new thoughts and creating new things. I have to keep exploring. Which leads to…
A quick search led me to this post from danah boyd on coining context collapse. danah boyd talked about the term back in 2013, referencing her thesis from 2002. So the idea, while not new, was new to me. This topic is a rabbit hole, and I have just scratched the surface. I need to go off to read, watch, and listen. I will return soon.
I’ve fallen victim just now. I scoured the web for an hour following links to uncover new and interesting things to read. Then I took it all out of the context I was in and distilled my findings into a nice tidy list. I’m grappling with how the onslaught of Ben Thompson’s Aggregation Theory can mesh with avoiding context collapse via boyd/Lanier (the three should do a podcast together). Does pulling together sources and finding key themes inherently strip the human side of what people create? Or are we bound to keep mashing up ideas? Certainly all new things come from the history that preceded them, but how do we balance growing from this influence with remembering where we came from?
To do research, you take all the mind space of the internet, open 100 tabs, make some progress, then save it across all services to pick up again tomorrow. Just with this topic alone, I scattered material to YouTube, Kindle, Instapaper, iBooks, and OneNote. What in the world!? How do people keep any semblance of a train of thought when the best technologies are designed to keep us stretched in multiple directions? Where does the context remain after distilling your work into buckets and silos? This frustrates me. With all the learning one can do on the internet, why is it so unnatural and inhuman? What if the internet was set up more like college, where thoughts and ideas are shared amongst new learners and experts, instead of like a kindergarten classroom where things may be haphazardly thrown everywhere with no sense of where they came from?
There is more to this thread, but I need to dig deeper. I have my materials and my thoughts. Now I just need to stay focused. Keep my mental state and remember the context of where it all began.
First of all, Jetpack support is amazing. Automattic is known for its customer service oriented culture, and it shows. I was running into an issue where Jetpack would not connect to my site, so I reached out to their support team. They were responsive in helping me figure out the tech at all hours of the day, and they even researched how to solve a problem with a non-Automattic product. Great stuff, I appreciate it!
Here’s the link if you need help with Jetpack.
The first issue has been with the site since day one. For custom WordPress installs, the WordPress Address and Site Address URLs should be the same (both set to https://ryancropp.com in this case) no matter what they say:
Site Address (URL):
Enter the address here if you want your site home page to be different from your WordPress installation directory.
Just don’t try to manually update the WordPress Address and Site Address to your custom domain from the wp-admin dashboard. You will get locked out.
To fix the issue you need to FTP into your site and update the siteurl in the functions.php file for your installed theme:
Refresh the WordPress admin and then remove the update from functions.php.
Just for good measure, clear the Project Nami blob cache so no old site configurations are left hanging around. The instructions are in the readme of the Blob-cache download (why!?).
They’re kind of fun, but how are these still a thing? I guess we have Unix to thank. I need to use them on a 0 0 0 0 0 ? 2018/2 or 0 0 0 0 0 ? 2018/3 schedule at best (that is, once every two or three years). Here are some docs from Oracle and Quartz to figure out what that means.
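For the curious, those expressions use the Quartz flavor of cron, which has seven fields instead of Unix cron’s five. A few lines of Python can label the fields (a quick sketch; see the Quartz docs linked above for the full syntax):

```python
# Quartz cron has seven space-separated fields; classic Unix cron has five
# (no seconds field and no year field). The "?" means "no specific value"
# and is only valid in the day-of-month or day-of-week slot.
FIELDS = ["second", "minute", "hour", "day-of-month",
          "month", "day-of-week", "year"]

def label_quartz_cron(expr):
    """Pair each field of a Quartz cron expression with its name."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

schedule = label_quartz_cron("0 0 0 0 0 ? 2018/2")
print(schedule)  # the "year" field "2018/2" reads: every 2 years from 2018
```

So the joke holds up: a year field of 2018/2 only fires every other year.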
Turns out everything up to this point had nothing to do with getting Jetpack to work. It certainly didn’t hurt, but attempting to link Jetpack still showed the error “Verification secrets not found”.
On a whim I decided to look into the compatibility issues with Jetpack and Project Nami, the caching mechanism for WordPress on Azure. And what do you know, Issue #237 on the Project Nami GitHub had the answer.
One should now be able to solve the issue by adding the following to the site’s wp-config.php:
define( 'JETPACK_DISABLE_RAW_OPTIONS', true );
See Automattic/jetpack#7875 for more info.
So finally, if you’re following along at home, disable Jetpack raw options for Project Nami…
And it works!
You can sign up for email subscriptions in the sidebar.
Turning off browser extensions may or may not have helped. I turned off Ghostery in the middle of the process, forgot about it, then realized it was still off some time later.
I didn’t read as many books in 2017 as I did in 2016, but I still learned a lot from what I read this year. I did read Deep Work, and, with Klein, would highly recommend it. Titan by Ron Chernow was a brick of a good book and I would expect nothing less from Grant (in terms of both length and quality). And How to Get Filthy Rich in Rising Asia: A Novel by Mohsin Hamid was pretty good (even if I read it in 2016).
To start 2018, I’m reading You Are Not a Gadget by Jaron Lanier and Benjamin Franklin by Walter Isaacson. Lanier probably wouldn’t approve of this type of post (we should go for evergreen content instead of rehashing previous work), but I found it funny that so many people had these Best books posts. So this is mine! Looking forward, not back, to take what we learned in the past months and apply it to the present and future.
Here’s to another year of great books and learning.
Some notable repeats are Pachinko (Klein and NYTimes), Grant (NYTimes and Obama), Evicted (Gates and Obama), Exit West (NYTimes and Obama). There are a lot of books out there to read in 2018. If you are looking for something, perhaps take a recommendation from the world’s thought leaders. Or, in the spirit of Lanier, go out on your own and read something no one else is talking about. In either case, keep curious.
At this point, people don’t need to upgrade their phones every two years. Phones are fast enough, and the bump from the last generation A10 Fusion chip to the latest A11 Bionic really isn’t that important. Apple has even started adding a fancy name to the end to uphold the experience of getting a new, more powerful phone. As a result, the deliberate slowdown was seen as a user-hostile move to deceptively increase user delight when upgrading to a new phone, artificially enhancing the “this is so much smoother than my old phone” feeling. If the last iPhone started at 100% performance and degraded to 75%, the jump to 125% feels more significant.
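To make the effect concrete with those illustrative percentages (mine, not Apple’s): the real generational gain is modest, but the perceived one is not.

```python
# Hypothetical performance levels from the paragraph above.
old_peak = 1.00    # last-gen iPhone at full performance
throttled = 0.75   # same phone after the battery-driven slowdown
new_phone = 1.25   # next-gen iPhone

honest_jump = new_phone / old_peak   # the true generational improvement
felt_jump = new_phone / throttled    # the jump the upgrading user experiences

print(f"{honest_jump:.2f}x on paper, {felt_jump:.2f}x in the hand")
```

A 25% generational bump gets experienced as a roughly 67% speedup, which is exactly the artificially enhanced delight described above.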
It should go without saying that we think sudden, unexpected shutdowns are unacceptable. We don’t want any of our users to lose a call, miss taking a picture or have any other part of their iPhone experience interrupted if we can avoid it.
Apple mentions there are three contributors to battery life and performance:
As always, our team is working on ways to make the user experience even better, including improving how we manage performance and avoid unexpected shutdowns as batteries age.
As they should. Apple has always been the experience company. The Apple walled garden is carefully designed in the ethos that people don’t know what they want until you show it to them. Maybe we need a little more clarity into how Apple creates people’s preferences.
Remember this video? In it you start with a person lying on the ground looking up at the sky. The camera zooms out exponentially into space, showing the immense scale of the galaxies. Then, we quickly zoom back in to the molecular scale.
Notice the similarity of the Cosmic Web at 1 billion light years and quarks at 1 femtometer.
I immediately recalled the Cosmic eye video when I read this headline:
The article is written by an astrophysicist and a neuroscientist on the similar complexities and structures of the brain and the cosmic web.
The task of comparing brains and clusters of galaxies is a difficult one. For one thing it requires dealing with data obtained in drastically different ways: telescopes and numerical simulations on the one hand, electron microscopy, immunohistochemistry, and functional magnetic resonance on the other.
It also requires us to consider enormously different scales: The entirety of the cosmic web—the large-scale structure traced out by all of the universe’s galaxies—extends over at least a few tens of billions of light-years. This is 27 orders of magnitude larger than the human brain. Plus, one of these galaxies is home to billions of actual brains. If the cosmic web is at least as complex as any of its constituent parts, we might naively conclude that it must be at least as complex as the brain.
Complexity of the brain and cosmos “can be quantified by counting how many bits of information are necessary for building the smallest possible computer program that can … predict its behavior.” We do this using an equally fascinating measurement tool: computers. “Independent studies have concluded that the total memory capacity of the adult human brain should be around 2.5 petabytes, not far from the 1-10 petabyte range estimated for the cosmic web!” Petabytes are huge. This means the amount of information needed to predict the behavior of a human brain is roughly equivalent to the amount of data required to stream the first two seasons of Stranger Things 50 thousand times.
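The Stranger Things comparison is my own back-of-the-envelope arithmetic, so treat it as a sanity check rather than a measurement (assumed: roughly 17 episodes across the first two seasons at about 50 minutes each, streamed at HD-ish bitrates):

```python
PB = 1e15  # bytes in a petabyte

brain_capacity = 2.5 * PB  # estimated adult-brain memory capacity, per the quote
streams = 50_000           # how many times we claim to stream two seasons

bytes_per_stream = brain_capacity / streams  # bytes for one two-season stream
hours_per_stream = 17 * 50 / 60              # ~14 hours of video (assumed)
gb_per_hour = bytes_per_stream / 1e9 / hours_per_stream

print(f"{bytes_per_stream / 1e9:.0f} GB per stream, ~{gb_per_hour:.1f} GB/hour")
```

That works out to about 50 GB per two-season stream, or roughly 3.5 GB per hour, which is in the ballpark of an HD video stream, so the 50-thousand figure is at least self-consistent.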
Does this fact tell us something profound about the physics of emergent phenomena in the two systems? Maybe. But we must take these findings with a grain of salt. Our analysis has been limited to small samples taken with very different measurement techniques.
But it’s fun to think about.
My Instapaper reading list was piling up. Nearly half of the articles were from Stratechery, so I decided to knock them all out at once (well, over the course of a day or two).
From Weinstein and movies to the NYTimes and YouTube
In a world where the default news source is the Facebook News Feed, the New York Times is breaking out of the inevitable modularization and commodification entailed in supplying the “news” to the feed. That, in turn, requires building a direct relationship with customers: they are the ones in charge, not the gatekeepers of old — even they must now go direct.
YouTube produces an astounding amount of fame.
YouTube represents something else that is just as important: the complete lack of gatekeepers. Google CEO Sundar Pichai said on an earnings’ call earlier this year that “Every single day, over 1,000 creators reached the milestone of having 1,000 channel subscribers.” That is an astounding number in its own right; what is even more remarkable is that while Hollywood has only ~3,500 acting slots a year (including all movies, not just major studios), YouTube creates 100 times as many “stars” over the same time period.
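Thompson’s “100 times as many” line checks out with quick arithmetic on the two figures in the quote:

```python
creators_per_day = 1_000   # channels crossing 1,000 subscribers each day
hollywood_slots = 3_500    # acting slots per year, per the quote

youtube_stars_per_year = creators_per_day * 365
ratio = youtube_stars_per_year / hollywood_slots

print(youtube_stars_per_year, round(ratio))
```

365,000 new “stars” a year is right around 100 times Hollywood’s output, just as claimed.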
Did he say 330 million?
Requiring Facebook to offer its social graph to any would-be competitor as a condition of acquiring tbh would be a good outcome; unfortunately, it is perhaps the most unlikely, given the FTC’s commitment to unfettered privacy (without a consideration of the impact on competition).
existing customers were increasing spend by more than the revenue lost by those leaving
The most famous example of an ISP acting badly was a company called Madison River Communication which, in 2005, blocked ports used for Voice over Internet Protocol (VoIP) services, presumably to prop up their own alternative; it remains the canonical violation of net neutrality. It was also a short-lived one: Vonage quickly complained to the FCC, which quickly obtained a consent decree that included a nominal fine and guarantee from Madison River Communications that they would not block such services again. They did not, and no other ISP has tried to do the same; the reasoning is straightforward: foreclosing a service that competes with an ISP’s own service is a clear antitrust violation. In other words, there are already regulations in place to deal with this behavior, and the limited evidence we have suggests it works.
The equation is straightforward: there is wide consensus amongst economists of all political stripes that regulation imposes costs on both innovation and society through regulatory capture; I would prefer to avoid bearing that cost until we are certain it is necessary, particularly since the evidence to date suggests after-the-fact regulation is working.
The question that must be grappled with, though, is whether or not the Internet is “done.” By that I mean that today’s bandwidth is all we will ever need, which means we can risk chilling investment through prophylactic regulation and the elimination of price signals that may spur infrastructure build-out (that being the elimination of paid prioritization).
If we are “done”, then the potential harm of a Title II reclassification is much lower; sure, ISPs will have to do more paperwork, but honestly, they’re just a bunch of mean monopolists anyways, right? Best to get laws in place to preserve what we have.
But what if we aren’t done? What if virtual reality with dual 8k displays actually becomes something meaningful? What if those imagined remote medicine applications are actually developed? What if the Internet of Things moves beyond this messy experimentation phase and into real-time value generation, not just in the home but in all kinds of unimagined commercial applications? I certainly hope we will have the bandwidth to support all of that!
The problem with regulating broadband in this way, though, is that the definition of acceptable broadband is much more of a moving target. As Marc Andreessen memorably put it on Twitter:
@mattyglesias @binarybits Because sewers and electricity are far more static markets than broadband. You don’t shit 10x as much every 3 yrs.
— Marc Andreessen (@pmarca) February 23, 2014
Documenting why and how these platforms have power has, in many respects, been the ultimate theme of Stratechery over the last four-and-a-half years: this is a call to exercise it, in part, and a request to not, in another. There is a line: what is broadly deemed unacceptable, and what is still under dispute; the responsibility of these new powers that be is to actively search out the former, and keep their hands — and algorithms and policies — off the latter. Said French Revolution offers hints at fates if this all goes wrong.
This is a remarkable look at how Disney could leverage 21st Century Fox to compete against Netflix in the years ahead. One of the most insightful articles with a clear line of how we could get to a future where Netflix and Disney are massive content aggregators.
The best sort of acquisitions, though, are best described by the famous Wayne Gretzky admonition, “Skate to where the puck is going, not where it has been”; these are acquisitions that don’t necessarily make perfect sense in the present but place the acquirer in a far better position going forward: think Google and YouTube, Facebook and Instagram, or Disney’s own acquisition of Capital Cities (which included ESPN).
The problem now is obvious: Netflix wasn’t simply a customer for Disney’s content, the company was also a competitor for Disney’s far more important and lucrative customer — cable TV. And, over the next five years, as more and more cable TV customers either cut the cord or, more critically, never got cable in the first place, happy to let Netflix fulfill their TV needs, Disney was facing declines in a business it assumed would grow forever.
… differentiated content is Disney’s core competency, as demonstrated by its ability to extract profits from cable companies.
Consider the comparison in terms of BATNA (Best Alternative to a Negotiated Agreement): for distributors the alternative to not carrying ESPN was losing a huge number of customers who cared about seeing live sports; that’s not much of an alternative! Netflix, on the other hand, can — and is! — going straight to creators for content that viewers can watch instead of whatever Disney may choose to withhold if Netflix’s price is unsatisfactory.
Clearly it’s working: Netflix isn’t simply adding customers, it is raising prices at the same time, the surest sign of market power.
Therefore, the only way for Disney to avoid commoditization is to itself go vertical and connect directly with customers
Will it go through?
If one starts with a static view of the world as it is at the end of 2017, then there may be some minor antitrust concerns, but probably nothing that would stop the deal. Disney might have to divest a cable channel or two (the company’s power over distributors would be even stronger; basically the opposite of some of the concerns that halted the Comcast acquisition of Time Warner), and potentially be limited in its ability to make operational decisions about Hulu (Disney would have a controlling stake after the merger; Comcast was similarly restricted after acquiring NBC Universal, but there the concern was more about Comcast’s conflict of interest with regards to its cable TV business competing with Hulu). The Hulu point is interesting in its own right: Disney could choose to focus its streaming efforts there instead of building its own service, but I suspect it would rather own it all.
That’s it for now. Keep reading. Keep connecting.
I watched a few QCon videos on the InfoQ YouTube channel. The presentations are from conferences over the last year or so, but the videos were all uploaded within the last month.
I’m pretty sure this is a re-upload, but it caught my eye again. (Here’s another great session by Josh Evans: Mastering Chaos – A Netflix Guide to Microservices.) Josh Evans begins the talk with a focus on how the organizational structure at Netflix led to internal struggle and dictated the engineering process. Tribalism and the expression of Conway’s law meant the way Netflix shipped code mirrored the team hierarchy. It wasn’t until upper management got involved that the teams sorted out their differences and began to put the architecture before the org structure.
These quotes stood out to me:
Organizational Scalability: The ability for an organization to easily add people and domain responsibilities in response to increased work and complexity. The ease with which an organization or team can adapt to shifts in business strategy
For an organization to grow, the culture must be able to adapt to changes and fluctuations in daily tasks. Netflix grew from prioritizing DVDs by mail to online streaming, two starkly different business models. Looking back at the change, it is easy to offer explanations for how things went, but what seems to have been constant is the engineering culture. As Evans mentions:
We have a culture of creativity and self discipline, freedom and responsibility.
Once defined, culture is not easily changed, but setting the right culture from the beginning is crucial to and validated by success.
If you get a chance, be sure to watch (or at least listen) to the video.
This session with Rob Witoff of Coinbase, from March 6, 2017, details how the startup is growing its technology alongside the interest in cryptocurrencies. The nine-month gap in timing gives more perspective in light of recent events. Things are constantly evolving with Bitcoin; just look at the comments relevant to the week the video was published, on November 30, 2017. While cryptocurrency is its own fascinating discussion, the engineering culture at Coinbase is of paramount importance to the company and worth investigating.
Engineering Velocity requires tools and guardrails to empower engineers to work without fear.
And what is the Coinbase engineering velocity? Devs deploy an average of 4 times per week, about 16 times a month. This rate of code movement requires heavy investment in testing and tools to ensure changes are good. Not every deploy succeeds (by design), and each failure is an opportunity to improve the product. Bugs in successful deployments are opportunities to improve the deployment pipeline and catch similar errors in the future.
Coinbase avoids a culture of blame to ensure people have the freedom to learn and grow. The engineering systems support this ideal and allow the company to scale. People like to compare the market caps of Bitcoin and publicly traded companies, and we’ll have to see if the culture at Coinbase allows it to scale similarly to Netflix.
Again, I won’t spoil it all, as it’s worth the watch.
The InfoQ YouTube channel used to be NewCircle. NewCircle specialized in tech training videos, and InfoQ has a similar focus. I originally subscribed to the NewCircle channel, and when it switched over to InfoQ I was happily surprised by the new QCon session videos.