For anyone who hasn’t seen James Mickens give a talk, you should find a way to do so. Invite him to your university, lab, office, cave, or dungeon, or figure out a conference where he’ll be talking and go. It’s an experience that you don’t want to miss.
In the meantime, I’ve discovered that he’s been writing a series of amazing columns for the USENIX ;login: magazine, and they will tide you over until you get a chance to see him talk.
This is a bit late (alright, more than two months late), but the Linux Foundation did a little Q&A with me about my role as a developer in OpenDaylight. The key quote I think people should take away is this:
Grab the code and get it to do something first.
A good place to start is the installation guide, which also walks through getting the simple forwarding application to work. There are a few moving parts, but the documentation there is pretty good, and if you need any help you should jump on the #OpenDaylight channel on the freenode server, where there are almost always people willing to help out.
To say that SDN has a lot of hype to live up to is a huge understatement. Given the hype, some are saying that SDN can’t deliver, but others—notably Nicira—are saying that network virtualization is what will actually deliver on the promises of SDN. Instead, it appears that network virtualization is the first, and presumably not the best, take on a new way of managing networks: one where we can finally manage networks holistically, with policy and goals separated from the actual devices, be they virtual or physical, that implement them.
Out with SDN; In with Network Virtualization?
In the last few months there has been a huge amount of back and forth about SDN and network virtualization. Really, this has been going on since Nicira was acquired about a year ago and probably before that, but the message seems to have solidified recently. The core message is something like this:
SDN is old and tired; network virtualization is the new hotness.
That message—in different, but not substantially less cheeky terms—was more or less exactly the message that Bruce Davie (formerly Cisco, formerly Nicira, now VMware) gave during his talk on network virtualization at the Open Networking Summit in April. (The talk slides are available there along with a link to the video, which requires a free registration.)
The talk rubbed me all the wrong ways. It sounded like, “I don’t know what this internal combustion engine can do for you, but these car things, they give you what you really want.” It’s true and there’s a point worth noting there, but the point is not that internal combustion engines (or SDN) are not that interesting.
A 5-year retrospective on SDN
Fortunately, about a month ago, Scott Shenker of UC Berkeley gave an hour-long retrospective on SDN (and OpenFlow) focusing on what they got right and wrong with the benefit of 5 years of hindsight. The talk managed to nail more or less the same set of points that Bruce’s did, but with more nuance. The whole talk is available on YouTube and it should be required watching if you’re at all interested in SDN.
The highest-order bits from Scott’s talk are:
Prior to SDN, we were missing any reasonable kind of abstraction or modularity in the control planes of our networks. Further, identifying this problem and trying to fix it is the biggest contribution of SDN.
Network virtualization is the killer app for SDN and, in fact, it is likely to be more important than SDN and may outlive SDN.
The places they got the original vision of SDN wrong were where they either misunderstood or failed to fully carry out the abstraction and modularization of the control plane.
Once you account for the places where Scott thinks they got it wrong, you wind up coming to the conclusion that networks should consist of an “edge” implemented entirely in software where the interesting stuff happens and a “core” which is dead simple and merely routes on labels computed at the edge.
This last point is pretty controversial—and I’m not 100% sure that he argues it to my satisfaction in the talk—but I largely agree with it. In fact, I agree with it so much so that I wrote half of my PhD thesis (you can find the paper and video of the talk there) on the topic. I’ll freely admit that I didn’t have the full understanding and background that Scott does as he argues why this is the case, but I sketched out the details on how you’d build this without calling it SDN and even built a (research quality) prototype.
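The edge/core split Scott argues for can be sketched in a few lines of Python. This is purely illustrative—the tables, addresses, port names, and function names below are all made up for the example, not taken from any real controller—but it captures the division of labor: all the interesting policy lives in software at the edge, which computes a label, and the core forwards on nothing but that label.

```python
# Hypothetical sketch: a software "edge" that computes forwarding labels,
# and a dead-simple "core" that routes on labels alone.

EDGE_POLICY = {  # the edge holds the full policy: (src, dst) -> label
    ("10.0.0.1", "10.0.0.2"): 7,
    ("10.0.0.1", "10.0.0.3"): 9,
}

CORE_ROUTES = {  # the core is trivial: label -> next hop
    7: "core-port-2",
    9: "core-port-3",
}

def edge_ingress(src, dst):
    """Edge software classifies the packet and assigns a label."""
    return EDGE_POLICY[(src, dst)]

def core_forward(label):
    """The core never inspects headers; it only switches on the label."""
    return CORE_ROUTES[label]

label = edge_ingress("10.0.0.1", "10.0.0.2")
print(core_forward(label))  # the core's decision uses only the label
```

The point of the split is that all the complexity (and all future feature work) lands in the edge, which is software and easy to change, while the core hardware stays simple and stable.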
What is network virtualization, really?
Network virtualization isn’t so much about providing a virtual network as much as it is about providing a backward-compatible policy language for network behavior.
Anyway, that’s getting a bit afield of where we started. The thing that Scott doesn’t quite come out and say is that the way he thinks of network virtualization isn’t so much about providing a virtual network as much as it is about providing a backward-compatible policy language for network behavior.
He says that Nicira started off trying to pitch other ideas of how to specify policy, but that they had trouble. Essentially, the clients they talked to said they knew how to manage a legacy network and get the policy right there and any solution that didn’t let them leverage that knowledge was going to face a steep uphill battle.
The end result was that Nicira chose to implement an abstraction of the simplest legacy network possible: a single switch with lots of ports. This makes a lot of sense. If policy is defined in the context of a single switch, changes in the underlying topology don’t affect the policy (it’s the controller’s responsibility to keep the mappings correct) and there’s only one place to look to see the whole policy: the one switch.
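The “one big switch” idea can be made concrete with a tiny sketch. Everything here is hypothetical (the port numbers, switch names, and `compile_policy` helper are invented for illustration, not Nicira’s actual implementation): policy is written once against virtual ports on a single imaginary switch, and the controller owns the mapping from virtual ports to physical (switch, port) pairs, so a topology change only touches the mapping, never the policy.

```python
# Hypothetical sketch of the "one big switch" abstraction.

# Policy against the single virtual switch: virtual in-port -> virtual out-port.
big_switch_policy = {1: 2, 3: 4}

# The controller's job: keep the virtual-to-physical mapping correct.
port_map = {
    1: ("s1", 1),
    2: ("s2", 3),
    3: ("s1", 2),
    4: ("s3", 1),
}

def compile_policy(policy, mapping):
    """Translate the single-switch policy into physical (switch, port) rules."""
    return {mapping[vin]: mapping[vout] for vin, vout in policy.items()}

rules = compile_policy(big_switch_policy, port_map)
# If a host moves, only port_map changes; big_switch_policy is untouched.
```

When the topology changes, the operator re-runs the compilation with an updated `port_map`; the policy itself—the one place to look—never moves.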
The next big problems: High-level policy and composition of SDN apps
Despite this, there are at least two big things which this model doesn’t address:
In the long run, we probably want a higher-level policy description than a switch configuration even if a single switch configuration is a whole lot better than n different ones. Scott does mention this fact during the Q&A.
While the concept of network virtualization and a network hypervisor (or a network policy language and a network policy compiler) helps with implementing a single network control program, it doesn’t help with composing different network control programs. This composition is required if we’re really going to be able to pick and choose the best-of-breed hardware and software components to build our networks.
Both of these topics are being actively worked on, both in the open source community (mainly via OpenDaylight) and in academic research, with the Frenetic project probably being the best known and most mature effort. In particular, their recent Pyretic paper and talk took an impressive stab at how you might do this. Like Frenetic before it, Pyretic takes a domain-specific language approach and assumes that all applications (which are really just policy, since the language is declarative) are written in that language.
Personally, I’m very interested in how many of the guarantees that the Frenetic/Pyretic approach provides can be provided by using a restricted set of API calls rather than a restricted language in which all applications have to be written. Put another way, could the careful selection of the northbound APIs provided to applications in OpenDaylight enable us to get many—or even all—of the features that these language-based approaches provide? I’m not sure, but it’s certainly going to be exciting to find out.
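To give a feel for the kind of composition at stake, here is a minimal sketch of Pyretic-style sequential and parallel composition operators. This is not the actual Pyretic API—the operator names and the packet representation are invented for illustration—but it shows the core idea: model each policy as a function from a packet to a set of (possibly rewritten) packets, and then combine policies without either one knowing about the other.

```python
# Hypothetical sketch of Pyretic-style policy composition.
# A "policy" is a function: packet -> set of output packets.
# Packets are modeled as frozensets of (field, value) pairs.

def seq(p1, p2):
    """Sequential composition: feed every output of p1 into p2."""
    def composed(pkt):
        out = set()
        for intermediate in p1(pkt):
            out |= p2(intermediate)
        return out
    return composed

def par(p1, p2):
    """Parallel composition: apply both policies and union the results."""
    return lambda pkt: p1(pkt) | p2(pkt)

# Two toy applications that know nothing about each other:
monitor = lambda pkt: {pkt | frozenset([("copy", "collector")])}
forward = lambda pkt: {pkt | frozenset([("out", 2)])}

pkt = frozenset([("src", "h1")])
both = par(monitor, forward)   # monitor AND forward, side by side
chain = seq(monitor, forward)  # monitor, THEN forward the tagged copy
```

The composition operators, not the individual applications, are what let independently written control programs share one network—exactly the gap the single-switch abstraction leaves open.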
I’ve been thinking a little more recently about how to be disruptive in the networking space and in particular in the data center networking space since that’s where I spend a lot of my intellectual cycles. One thing that we always talk about is reducing costs and in particular reducing CAPEX (capital expenditure) and OPEX (operational expenditure). Generally, there’s more discussion around reducing OPEX than CAPEX because purely software tools for simplifying management and increasing automation can improve OPEX, while improving CAPEX typically happens in more complex ways over longer timescales.
When companies discuss reductions in OPEX, just remember: you are OPEX most, if not all, of the time. Self-service and automation are great, but if that service is what you provide (and what provides your income), you had better do something about it. Don’t become roadkill on the path to the future.
This is even a bit more interesting because the disruptive products that we intend to sell are typically sold to IT departments. That is, we’re selling the product to the people whose jobs the products most endanger.
There is a bit of a silver lining which is that automation and simpler management also appeal, very strongly, to the very same people. Nobody wants to be doing simple, menial, repetitive tasks all day and tools that cut down on such things tend to be broadly popular.
How do we reconcile these two things? On the one hand, we have tools that, if they are successful, clearly make it possible for a smaller number of people to accomplish the same tasks, which should reduce the total number of jobs. On the other hand, the people whose jobs are being threatened often embrace these tools. A knee-jerk reaction would be for them to oppose the tools. Why don’t they?
A simple explanation would be that they’re short-sighted and willing to take the short-term reduction of menial work without worrying about the long-term career jeopardy. That may be a little true, but there’s also a more satisfying answer, which I think holds more of the truth, as the blog post points out:
Virtualisation in the server space didn’t lead to a radical or even a slow loss of roles that I’m aware of; if anything more are required to handle the endless sprawl. Perhaps the same will happen in networking?
Jobs (and entire skill-sets with them) will be lost, but the removal of the pain associated with networking will increase its use. Along with general market growth, this may absorb those affected and history shows we’re all mostly resilient and adaptable to change. Of course, there are always casualties; some can’t or won’t want to change their thinking and their skills in order to reposition themselves.
This resonates with me. It also reminds me of comments that James Hamilton of Amazon AWS fame made during a talk he gave while I was at UW. Essentially, his point was that for every cost reduction they made in AWS, it increased the set of things people wanted to do on AWS more than it decreased profits. In other words, making computing—and networking specifically—more consumable and cheaper will result in there being more computing, not less.
That’s not to say that strictly everyone will be better off, but just that there’s likely to not be some huge collapse in IT and networking jobs as we do a better job of automating things. At least not in the near future.
I really try to be the calm rational one when it comes to accusations of massive surveillance and people saying that the government can “hear, see and read everything,” but this seemingly casual remark seems to be the last in a string of information that points directly to the US government trying, and succeeding in many cases, to record all communication so that they can go back to it later if they so choose.
As the article points out, there have been congressmen, NSA employees, and private-company (AT&T) employees all trying to blow the whistle. The article doesn’t even mention the massive NSA data center being built in Utah.
I guess it’s time to adjust our expectations even as we try our best to push for transparency, regulation, and change.
My official day job is as a computer systems researcher at IBM Research, but I really like to build things and, when possible, push them into the real world. I’ve had the incredible good luck to land in a spot where I’m able to do just that. Last year, I helped ship IBM’s first in-house OpenFlow controller. This year, I’m hoping to do quite a bit more than that. Yesterday was the public launch of OpenDaylight and now I can talk about a bunch of the things I’ve been working on in one way or another since December.
There’s been lots of coverage all over the web about OpenDaylight’s launch today, but I’ll just give the link to the Wall Street Journal’s coverage because I think it describes the details and context better than a lot of the other coverage.
OpenDaylight is really exciting. It is a Software-Defined Network (SDN) controller backed by real promises—both money and people—from pretty much every big name in the networking industry. At the same time it’s a project being run by the Linux Foundation and managed by people who have real reputations for doing open source well and for keeping projects truly open. With OpenDaylight, even individual contributors are encouraged to participate and contribute freely.
If things go the way I hope they will, OpenDaylight will provide a common platform for companies to build solutions, researchers to implement new ideas and for people to learn and teach about SDN and networking in general. It really has the potential to be the Linux of networking in a way that I don’t think anyone thought was possible six months ago.
Lest the people who know me think that I’ve been somehow brainwashed, there are obviously still challenges and rough edges in the project. Just to name a few: There’s no magic bullet for interoperability; right now the controller relies solely on OpenFlow 1.0 to control the network. The code is pretty “enterprisey,” using OSGi bundles and some pretty heavyweight design patterns. The process of getting the code down and loaded into Eclipse is more complex than I’d like. Despite that, if we as an industry and community can actually use this as a focal point, we can fix these issues in short order.
Really, my hope is that those of you who are interested take this post as a call to action. If you’re a networking researcher and looking for a platform to build an SDN application, consider using OpenDaylight. If you’re interested in what this whole SDN thing is, consider playing around with OpenDaylight. If you’re thinking about trying to actually build an SDN application, definitely try working with OpenDaylight first.
If all of that doesn’t convince you, then here’s a promotional video that contains unicorns, rainbows, bacon and penguins:
A while ago I wrote that I thought Apple’s major failing is not realizing that we’re heading for a post-device world where the devices we use in the future will become a lot like the apps we use today. That is, with a few exceptions, more or less impulse buys rather than painstakingly selected profitable objects.
It seems as though some of this new world is happening faster than I would have thought. In a recent article at VentureBeat, a financial analyst who’s spent a bunch of time in China over the last 10+ years describes his shock at finding fully-capable 7″ Android tablets running Ice Cream Sandwich for sale for $45. They’re apparently called A-Pads.
The truth is that if your company sells hardware today, your business model is essentially over. No one can make money selling hardware when faced with the cold hard truth of a $45 computer that is already shipping in volume.
My contacts in the supply chain tell me they expect these devices to ship 20 million to 40 million units this year. Most of these designs are powered by a processor from a company that is not known outside China — All Winner. As a result, we have heard the tablets referred to as “A-Pads.”
When I show this tablet to people in the industry, they have universally shared my shock. And then they always ask “Who made it?” My stock answer is “Who cares?” But the truth of it is that I do not know. There was no brand on the box or on the device. I have combed some of the internal documentation and cannot find an answer. This is how far the Shenzhen electronics complex has evolved. The hardware maker literally does not matter. Contract manufacturers can download a reference design from the chip maker and build to suit customer orders.
He goes on to draw the scary, but straightforward conclusion that:
No one can make money selling hardware anymore. The only way to make money with hardware is to sell something else and get consumers to pay for the whole device and experience.
So, companies like Apple can stay around if they can add enough extra things to command a higher price for their hardware. Apple in particular has an advantage because they have enough money that they can actually fund the creation of new fabs in exchange for getting the best hardware before everyone else, but it seems that will likely fade some too. He mentions that product cycles are getting shorter, so competitive advantages like Apple’s fab funding are likely to last for shorter and shorter times.
As product cycles tighten (and we had quotes for 40-day turnaround times), the supplier with the right technology, available right now will benefit.
It seems to me like the right option is to admit that your hardware business is likely to be undercut in most areas and to instead focus on software and integration and move up the stack to where there’s still real value. That being said, this is exactly the kind of thing that the Innovator’s Dilemma says is nearly impossible for companies to do.
This whole recent article on “The Cheapest Generation” does a lot of talking about how young people’s buying habits have been changing. Specifically, they’re buying fewer cars and houses and even when they buy them, they’re going with smaller and cheaper ones. Some of that can be chalked up to the recent economic collapse, but the article argues that even the collapse doesn’t quite explain it all.
In any event, the article has one line which explains so much of the current generation in a single, short sentence that it’s stuck with me ever since I read it.
Young people prize “access over ownership,” said Sheryl Connelly, head of global consumer trends at Ford.
It explains the transition from car ownership to Zipcar. It explains the shift from buying music to subscription services like Pandora and Spotify. It also explains why people are mostly happy to jump to the cloud—where they don’t own their data, but do have access to it. It even explains a bunch of the mentality of Facebook—focusing on providing the cleanest simplest way to give people access to their lives even as they give up control of things.
From our experience, a $2.99 app in the App Store needs to hover around #250 in the top paid list to sustain two people working full-time on the app.
This doesn’t work. You need to be able to have more than ~500 developers making apps full-time. A lot more.
The worst part of all of this isn’t actually the fact that the app store is unsustainable—that’s fine, the app store can fail or people can raise prices. The real problem is that in the process of failing, the app store is redefining what people think software is worth. If we’ve permanently driven people’s valuation of software down from $30–50 to $1–5, that’s going to really hurt software development for some time to come.
For the last while I’ve had this unsettling feeling that while there are a lot of startups going around, most of them aren’t really innovating that much. Most notably, the idea that innovation can come from companies basically just trying to make apps that serve the smallest little quirks of what we want rings hollow to me.
If your core innovation is just an iPhone app, then it’s not clear that you’ve really had the impact you wanted to have. There are obvious exceptions like VizWiz, which a friend of mine (now a professor at the University of Rochester) wrote, that allows blind people to take a picture, speak a question about the picture and get a response back within seconds, allowing the vision impaired to “see” via a proxy. But, in general, your new hyper-local, crowdsourced, social, location-based iPhone app startup is probably not going to change the world.
The always great Planet Money Podcast reminded me of this with their recent episode The Cool Kids Don’t Want To Go Public. They explored the fact that a lot of the new companies that people are starting aren’t really sustained business models but rather an elaborate courtesan dance to seduce the princely giants of the tech world.
Hammerbacher looked around Silicon Valley at companies like his own, Google, and Twitter, and saw his peers wasting their talents. “The best minds of my generation are thinking about how to make people click ads,” he says. “That sucks.”
The Planet Money Podcast also makes the more nuanced point that there are people who choose not to take their companies public because they don’t need the extra money to grow and they don’t want the extra oversight that’s required. Still, it seems like we could use more people trying to build companies that can last decades rather than companies looking for the early exit.