Blockchain and Bearer Tokens

Bitcoin keychain/keyring and key

One of the problems with most substitutes for email is that they fail to implement a concept that Marc Stiegler of HP writes about in his technical report Rich Sharing for the Web. Marc outlines six features of rich sharing that are captured in this short scenario:

Alice, in a race to her next meeting, turns thunder-struck to Bob and says, “Bob, I just remembered I need to get my daughter Carol’s car to Dave’s repair shop. I’ve got to go to this meeting. Can you take Carol’s car over there?”

Marc's thesis is that email has held on so long because it's one of the few systems that supports all six features of rich sharing. But that's not really what this post is about, so I won't describe that further. You can read Marc's excellent white paper from the link above if that interests you.

As I was contemplating this scenario this morning, I was thinking that part of what makes this work is the idea of bearer tokens. A car key is essentially a bearer token. Anyone who has the key can open, start, and drive the car. Hence Carol can delegate to her mother Alice by giving her the key, as can Alice to Bob, and then Bob to the mechanic.

The problem with bearer tokens is that they can be easily copied. If I give you the "key" to my account at Dropbox in the form of the OAuth2 bearer token that authorizes access, we both have a copy. In fact you could put it on a web site and everyone would be able to get a copy. OAuth2 bearer tokens don't work like car keys.

The key difference is that car keys are fermions, not bosons. That is, a car key can be in exactly one place at one time, whereas bearer tokens can be in multiple places at the same time. Sure, we can make a copy of the car key, but that takes work. Exchanging keys takes work too: the parties have to arrange to meet or something similar. The concept of work is critical.

One of the key features of the bitcoin blockchain is that it prevents double spending. That is, even though the data representing a coin can be in many places, only one person can spend it. And, importantly, they can spend it only once. This seems like a property we'd want bearer tokens to have.

The simple concept is to put bearer tokens in a distributed ledger like the blockchain so that we only allow the current holder of the token to use it. Checking if someone is the current holder of the token is easy since everyone can have a copy of the ledger. But transferring takes work in the same way that transferring a bitcoin takes work (that's what all the "bitcoin mining" is really accomplishing).

In fact we could probably just use bitcoins as tokens. When Alice authorizes Bob to access her account on my system, I'll send a small amount of bitcoin to Bob. When Bob accesses the system, he presents the coin (note, he just has to show it to my system, not spend it or transfer it) and I can check that it's the right coin and that Bob is the current holder. If Carla presents the token (coin), I can check that she doesn't own it and refuse service.
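To make this concrete, here's a minimal sketch in Python (a toy stand-in for a real blockchain, with invented names; nothing here reflects the actual Bitcoin protocol) of a service that checks a ledger for a token's current holder before granting access:

```python
# Toy model of ledger-backed bearer tokens. A real system would use
# signed blockchain transactions; here a dict stands in for the ledger.

class Ledger:
    def __init__(self):
        self.holder = {}          # token_id -> current holder

    def issue(self, token_id, holder):
        self.holder[token_id] = holder

    def transfer(self, token_id, from_holder, to_holder):
        # Only the current holder can transfer; this is the ledger's
        # analog of double-spend prevention.
        if self.holder.get(token_id) != from_holder:
            raise ValueError("not the current holder")
        self.holder[token_id] = to_holder

    def is_holder(self, token_id, claimant):
        return self.holder.get(token_id) == claimant

ledger = Ledger()
ledger.issue("dropbox-access-42", "bob")
print(ledger.is_holder("dropbox-access-42", "bob"))    # True: serve Bob
print(ledger.is_holder("dropbox-access-42", "carla"))  # False: refuse Carla
ledger.transfer("dropbox-access-42", "bob", "carla")
print(ledger.is_holder("dropbox-access-42", "carla"))  # True after transfer
```

The essential property is that transfer is the only way to change holders and only the current holder can invoke it, so presenting a mere copy of the token is worthless.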

A few thoughts:

  • There's nothing in this system to prevent Bob from transferring the coin to Carla. That is, after Alice gives the token to Bob, she has no control over whom he transfers it to.
  • This system costs real money. That is, bitcoins, no matter how small, have value. But that's a feature, not a bug, as the saying goes. The cost makes bearer tokens behave more like physical keys.

This is open loop thinking, but I'd appreciate your thoughts and feedback on how a system like this might work better and what problems it might solve or create.

What Happens to the Data

Silos de Trigueros

Nathan Schor pointed me at an article about Metromile that appeared in TechCrunch recently. Metromile is a per-mile insurance company that uses an OBD II device that you plug into your car. It tracks your vehicle's stats, similar to Fuse, Automatic, and other connected-car services.

The kicker is that it's free, because Metromile is making money by selling per-mile insurance. The more users they have using their device, the bigger their potential market for selling insurance. That is made evident by the fact that you can only get the free device if you live in a state where they offer insurance (currently CA, OR, and IL). Otherwise, get in line (until they come to your state, presumably).

I don't know how Metromile is implemented, but I wonder what happens to the data. I'm pretty sure they're using a cellular device (rather than Bluetooth) so that the data is always transmitted to their system even if your phone's not in the car or connected. Does all the data about every trip go to the insurance company? Or some aggregation? What's the algorithm?

These questions are relevant because it's unclear who ultimately owns this data. Users aren't paying for the device or the data, just the insurance. As I wrote in The CompuServe of Things, business models that connect devices to non-substitutable services threaten to leave users with little control over the things they own and use.

I believe users ought to be customers who own the data and control where and how it's used. That doesn't mean they can't choose to share it with the insurance company, but they ought to know what's being shared and even be able to substitute one insurance company for another. If every connected-car device is tied to a different insurance company, I can't switch without giving up access to all the data that's been collected about my car and driving.

Data silos with murky policies about data ownership are all too common. Unfortunately, they lead to a future I don't want to live in. And if you think about it, I'll bet you won't want to live there either.

A Microservice for Healing Subscriptions

Such Great Lows

Last week I wrote about how Fuse uses a microservices architecture and the benefits that such an architecture provides. This morning I was faced with a problem that the microservices approach solved handily, so I thought I'd write it up as an example.

Fuse uses webhooks (we think of them as event channels) to receive notifications that a vehicle has started or stopped, has a low battery, etc. If these subscriptions aren't set up for a vehicle, nothing works. Most noticeably, trips aren't recorded. Alex, who's working on the Fuse app, wasn't seeing any trips from his truck. Sure enough, when I checked, it had no Fuse event subscriptions.

I could have just reinitialized Alex's truck, but I figured if it happened to Alex then it's likely to happen to other people, so I created a simple microservice (i.e. KRL rule) that checks the number of subscriptions and if it's lower than the expected number, reinitializes them.

rule check_subscriptions {
  select when fuse subscription_check
  pre {
    vid = carvoyant:vehicle_id();
    my_subs = carvoyant:getSubscription(vid);
    should_have = required_subscription_list.length();
  }
  if(my_subs.length() < should_have) then
    send_directive("not enough subscriptions") with
      my_subscriptions = my_subs and
      should_have = should_have;
  fired {
    log ">>>> vehicle #{vid} needs subscription check";
    raise fuse event need_initial_carvoyant_subscriptions;
  } else {
    log ">>>> vehicle #{vid} has plenty of subscriptions";
  }
}

The rule uses a pre-existing function to get the current subscriptions and compares the length of that result to the number we should have. If there aren't enough, the rule fires and raises the fuse:need_initial_carvoyant_subscriptions event.

This is a really simple rule. One reason is because it makes use of other services that already exist. There's already a working function for getting the current subscriptions for a vehicle. There's already another service (or rule), called initialize subscriptions, that sets up the subscriptions when they're needed.

Another reason the rule is simple is because it doesn't have to figure out which subscriptions are missing and limit itself to only initializing those. If any are missing, it asks for them all. That's because the initialize subscriptions rule is idempotent. You can run it as many times as you like without messing anything up. Of course, I'd rather not put that load on the system if I don't have to, so the check_subscriptions rule checks if something needs to be done before it raises the event.

The primary point is that the microservice architecture is loosely coupled, so setting up a service like this is easy. There's very little code, it makes use of other services, and it's unlikely to break anything. I wired it into the system by raising the event it looks for, fuse:subscription_check, when the vehicle profile is updated. That seems like a good compromise between over-checking and user control.

Idempotent Services and Guard Rules

Microservices are usually easier to program when responses to an event are idempotent, meaning that they can run multiple times without cumulative effect.

Many operations are idempotent (i.e. adding a ruleset to a pico over and over only results in the ruleset being added once). For operations that aren't naturally idempotent, we can make the rule idempotent using the rule's guard condition. Using a guard condition we can ensure the rule only fires when specific conditions are met.
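As a sketch of the idea in Python (not KRL; the names are invented for illustration), a guard condition makes a non-idempotent operation safe to repeat:

```python
# Sketch: an event handler made idempotent with a guard condition.
# append_ruleset is not idempotent on its own; the guard makes the
# handler safe to run any number of times.

installed = []  # persistent state standing in for a pico's ruleset list

def append_ruleset(name):
    installed.append(name)   # naive operation: repeats would accumulate

def install_ruleset(name):
    if name in installed:    # guard condition: already done, do nothing
        return
    append_ruleset(name)

for _ in range(3):           # running the handler repeatedly is harmless
    install_ruleset("fuse_vehicle")
print(installed)             # ['fuse_vehicle']
```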

Unfortunately, there are some functions in KRL (notably in the PCI and RSM modules) that make state changes (i.e. have persistent effect). These modules are used extensively in CloudOS. When these are used in the rule prelude, they cause side effects before the rule's guard condition is executed. This is a design flaw in KRL that I hope to rectify in a future version of the language. These functions should probably be actions rather than functions so that they only operate after the guard condition is met.

In the meantime, a guard rule offers a useful method for assuring idempotency in rules. The basic idea is to create two rules: one that tests a guard condition and one that carries out the rule's real purpose.

The guard rule:

  1. responds to the event
  2. tests a condition that ensures idempotence
  3. raises an explicit event in the postlude for which the second rule is listening
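The three steps above can be sketched as two handlers connected by an explicit event (a Python stand-in for KRL rules on a pico's event bus; all names are invented):

```python
# Sketch of the guard-rule pattern: the guard rule tests a condition
# and, only if it passes, raises an explicit event that the worker
# rule listens for. Neither rule does the other's job.

handlers = {}                        # event name -> list of handlers

def on(event, fn):
    handlers.setdefault(event, []).append(fn)

def raise_event(event, **attrs):
    for fn in handlers.get(event, []):
        fn(attrs)

state = {"fleet_channel": None}      # persistent pico state

def guard_rule(attrs):
    if state["fleet_channel"] is None:   # guard: no fleet exists yet
        raise_event("need_new_fleet", fleet=attrs.get("fleet", "My Fleet"))

def create_fleet(attrs):                 # worker: does the real work
    state["fleet_channel"] = "channel-for-" + attrs["fleet"]

on("need_fleet", guard_rule)
on("need_new_fleet", create_fleet)

raise_event("need_fleet", fleet="Family Cars")
raise_event("need_fleet", fleet="Family Cars")   # second raise is a no-op
print(state["fleet_channel"])                    # channel-for-Family Cars
```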

For example, in the Fuse system, we want to ensure that each owner has only one fleet. This condition may be relaxed in a future version of the Fuse system, but for now, it seems a reasonable limitation.

There are several examples in Fuse where a guard rule is used. The following is the guard rule for the Fuse initialization:

rule kickoff_new_fuse_instance {
  select when fuse need_fleet
  pre {
    fleet_channel = pds:get_item(common:namespace(),"fleet_channel");
  }
  if(fleet_channel.isnull()) then
    send_directive("requesting new Fuse setup");
  fired {
    raise explicit event "need_new_fleet"
      with _api = "sky"
       and fleet = event:attr("fleet") || "My Fleet";
  } else {
    log ">>>>>>>>>>> Fleet channel exists: " + fleet_channel;
    log ">> not creating new fleet ";
  }
}

The guard rule merely looks for a fleet channel (evidence that a fleet already exists) and only continues if the fleet channel is null.

The second rule does the real work of creating a fleet pico and initializing it.

rule create_fleet {
  select when explicit need_new_fleet
  pre {
    fleet_name = event:attr("fleet");
    pico = common:factory({"schema": "Fleet", "role": "fleet"}, meta:eci());
    fleet_channel = pico{"authChannel"};
    fleet = {"cid": fleet_channel};
    pico_id = "Owner-fleet-"+ random:uuid();
  }
  if (pico{"authChannel"} neq "none") then every {
    send_directive("Fleet created") with
      cid = fleet_channel;
    // tell the fleet pico to take care of the rest of the initialization.
    event:send(fleet, "fuse", "fleet_uninitialized") with
      attrs = {"fleet_name": fleet_name,
               "owner_channel": meta:eci(),
               "schema":  "Fleet",
               "_async": 0  //complete this before we try to subscribe below
              };
  }
  fired {
    // put this in our own namespace so we can find it to enforce idempotency
    raise pds event new_data_available
      with namespace = common:namespace()
       and keyvalue = "fleet_channel"
       and value = fleet_channel
       and _api = "sky";
    // make it a "pico" in CloudOS eyes
    raise cloudos event picoAttrsSet
      with picoChannel = fleet_channel
       and picoName = fleet_name
       and picoPhoto = common:fleet_photo
       and picoId = pico_id
       and _api = "sky";
    // subscribe to the new fleet
    raise cloudos event "subscribe"
      with namespace = common:namespace()
       and  relationship = "Fleet-FleetOwner"
       and  channelName = pico_id
       and  targetChannel = fleet_channel
       and  _api = "sky";
    log ">>> FLEET CHANNEL <<<<";
    log "Pico created for fleet: " + pico.encode();
    raise fuse event new_fleet_initialized;
  } else {
    log "Pico NOT CREATED for fleet";
  }
}

When this rule fires, an action sends an event to the newly created fleet pico that causes it to initialize and three events are raised in the postlude that cause further initialization to take place.

Fuse as a Microservice Architecture

I recently ran across the idea of microservices. I don't know why it took me so long to run across it, since they've been discussed for a few years and there are many articles written about them (see the end of this post for a link list). They have, like anything new, many different definitions, but I've settled on a few characteristics that I think differentiate a microservice from other styles of architecture:

  1. Organized around a single business capability
  2. Small, generally less than 1000 lines of code and usually much smaller
  3. Event-based and asynchronous
  4. Run in their own process
  5. Independently deployable
  6. Decentralized data storage

What struck me as I reviewed various material on microservices is how much the philosophy and architectural style match what I've been preaching around persistent compute objects (picos). As I worked through the ideas, I came to the conclusion that since you can view each pico as an event bus, we can view each rule installed in the pico as a microservice.

With this lens, Fuse can be seen as an extensible, microservice architecture for connected cars.


Fuse is a connected-car platform. I've written extensively on Fuse in this blog. For the purposes of this post, it's important to understand the following:

  • Fuse uses Carvoyant to manage devices and provide an API that we use to get vehicle data. The Carvoyant API is a well-designed RESTful API that uses OAuth for user authorization.
  • Picos use a set of pre-built services that I collectively call CloudOS to manage things like creating and destroying picos, pico-to-pico subscriptions, storing profiles, etc.
  • Rules are collected into rulesets that can share function definitions.
  • Each ruleset has a separate persistent key-value store from every other ruleset.
  • Rules are programmed in a language called KRL.
  • When we create a pico for a vehicle, the pico is automatically endowed with an event bus that connects all rules installed in the pico.
  • CloudOS provides a ruleset that functions as a persistent data store for the entire pico called the PDS. The PDS provides a standard profile for each pico. Fuse stores all of the vehicle's configuration and identity information in the pico profile.
  • Other vehicle data is stored by the individual service. For example, the trip service stores information about trips, the fuel service stores information about fuel purchase, and so on.
  • Rules can use HTTP to interface with other Web-based APIs.

Not only do we create a pico for each vehicle, but we also create one for each owner, and one per owner to represent the fleet. They are organized as shown below.

fuse microservice overall

This organization provides essential communication pathways and establishes a structural representation of ownership and control.

Example Interactions

Microservices should be designed to be tolerant of failure. Let me walk through one place that shows up in Fuse. As mentioned above, Carvoyant provides an API that allows Fuse to interact with the OBD II device and its data. To do that, we have to mirror the vehicle, to some extent, in the Carvoyant API. Whenever we create a vehicle in Fuse, we have to create one in Carvoyant. But there's a small problem: before the vehicle can be created at Carvoyant, the owner needs a Carvoyant account, and that account needs to be linked (via OAuth) to their Fuse account.

One way to solve this problem would be to ensure that a vehicle can't be added to Fuse unless a Carvoyant account is active. Another way is to be more tolerant: let users add vehicles any time, even before they've created their Carvoyant account, and sync the two accounts proactively. This has the added benefit of working in the other direction as well. If someone happens to already have a Carvoyant account, they can add their vehicle to Fuse and the two accounts will sync up.

The following diagram shows a few of the microservices (rules) in the vehicle pico that help perform this task.

fuse microservice


As I mentioned above, the vehicle configuration is stored in the PDS profile. Whenever the profile is updated, the PDS raises a pds:profile_updated event. So anytime the user or a process changes any data in the vehicle profile, this event will be raised.

Any number of rules might be listening to that event. One rule that listens for that event is carvoyant initialize vehicle (lower left of the preceding diagram). Carvoyant initialize vehicle is a fairly complicated service that ensures that any vehicles in Fuse and Carvoyant are represented, if possible, in the other service. We'll examine it in more detail below. When it's done, carvoyant initialize vehicle raises the fuse:vehicle_uninitialized event if the vehicle has changed.

Also, when the carvoyant initialize vehicle rule contacts Carvoyant to either update or create a vehicle, it does so with an http:post(). Picos are designed to automatically raise an event with the results of the post when it completes. The initialization OK rule (lower center) listens for that event (only with an HTTP 200 status) and stores various information about the vehicle at Carvoyant. This is what establishes a link between the vehicle pico and the vehicle at Carvoyant.

Independently, the initialize vehicle rule (upper left in the diagram) is listening for the fuse:vehicle_uninitialized event. Initialize vehicle primarily functions to raise other events and thus sets off a collection of activities that are necessary to bring an uninitialized vehicle into the system and make it function. Since it's now been connected to Carvoyant, the vehicle needs to

  1. subscribe to events from the Carvoyant system including events that communicate the ignition status, fuel level, battery level, and any diagnostic codes,
  2. retrieve the current status of various vehicle systems, and
  3. retrieve specific data about the vehicle's geoposition.

The initialize vehicle rule doesn't ask for all these on its own; it merely signals that they're needed by raising events. Other services respond, each carrying out one simple task.

For example, the update vehicle data rule in the upper right makes calls to the Carvoyant API to gather data from the vehicle, communicates that data to the fleet pico (i.e., raises an external event from the standpoint of the vehicle), and raises various events including the pds:new_data_available event.

The PDS add item rule (lower right corner) is listening for the pds:new_data_available event. It takes the data in the event and stores it in the vehicle PDS. We chose to store the vehicle data in the PDS, rather than in the update vehicle data rule's own store, because the data is widely used and more accessible in the PDS.

Of course, these are only a few of the rules that make up the entire system. There are at least 60 in the vehicle pico and many more in the fleet. Most of these rules are quite simple, but still perform a vital task.

One important note: each vehicle is represented by a unique pico, and each pico has its own copy of each of these rules, running against the unique data for that vehicle. A rule doesn't know or care about any vehicle except the one it's installed in. As such, it is a dedicated service for that specific vehicle. In a system with 10,000 vehicles, there will be 10,000 independent carvoyant initialize vehicle microservices running, each listening for pds:profile_updated events from the PDS in the pico in which it runs.

A Detailed Look at A Service Rule

The carvoyant initialize vehicle rule performs initialization, but it has to be flexible. There are three basic scenarios:

  1. The Fuse vehicle pico has a representation in Carvoyant and thus any changes that have been made in Fuse merely need to be reflected back to Carvoyant.
  2. The Fuse vehicle pico has no Carvoyant representation but there is a Carvoyant vehicle that matches the Fuse vehicle in specific attributes (e.g. the vehicle identification number) and thus should be linked.
  3. The Fuse vehicle pico has no Carvoyant representation and nothing matches, so we should create a vehicle at Carvoyant to represent it.
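The three scenarios reduce to a three-way conditional on the vehicle ID, which can be sketched in Python (a hypothetical rendering of the logic the rule's prelude computes, not actual Fuse code):

```python
# Sketch of the vid calculation in carvoyant_init_vehicle's prelude.
# vehicle_match is the Carvoyant vehicle whose VIN matches the pico's
# profile (or None); linked_vid is the vehicle ID already stored in
# the pico's entity variable (or None).

def compute_vid(vehicle_match, linked_vid):
    should_link = vehicle_match is not None and linked_vid is None
    if should_link:             # scenario 2: matching vehicle, link it
        return vehicle_match["vehicleId"]
    if linked_vid is None:      # scenario 3: nothing matches, create
        return ""               # empty vid means POST creates a vehicle
    return linked_vid           # scenario 1: already linked, just update

print(compute_vid({"vehicleId": "C123"}, None))  # C123 (link existing)
print(compute_vid(None, None))                   # ''   (create new)
print(compute_vid(None, "C999"))                 # C999 (update existing)
```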

Here's the code for the rule:

rule carvoyant_init_vehicle {
  select when carvoyant init_vehicle
           or pds profile_updated
  pre {
    cv_vehicles = carvoyantVehicleData();
    profile = pds:get_all_me();
    vehicle_match = cv_vehicles
                      .filter(function(v){ v{"vin"} eq profile{"vin"} })
                      .head();

    existing_vid = ent:vehicle_data{"vehicleId"} || profile{"deviceId"};

    // true if vehicle exists in Carvoyant with same vin and not yet linked
    should_link = not vehicle_match.isnull()
               && ent:vehicle_data{"vehicleId"}.isnull();

    vid = should_link                            => vehicle_match{"vehicleId"}
        | ent:vehicle_data{"vehicleId"}.isnull() => ""
        |                                           existing_vid;

    config_data = get_config(vid);
    params = {
        "name": event:attr("name") || profile{"myProfileName"} || "Unknown Vehicle",
        "deviceId": event:attr("deviceId") || profile{"deviceId"} || "unknown",
        "label": event:attr("label") || profile{"myProfileName"} || "My Vehicle",
        "vin": event:attr("vin") || profile{"vin"} || "unknown",
        "mileage": event:attr("mileage") || profile{"mileage"} || "10"
    };
  }
  if( params{"deviceId"} neq "unknown"
   && params{"vin"} neq "unknown"
    ) then
    carvoyant_post(config_data, params)
      with ar_label = "vehicle_init";
  fired {
    raise fuse event vehicle_uninitialized
      if should_link || event:name() eq "init_vehicle";
    log(">> initializing Carvoyant account with device ID = " + params{"deviceId"});
  } else {
    log(">> Carvoyant account initialization failed; missing device ID");
  }
}

This code is fairly dense, but straightforward. There are four primary pieces to pay attention to:

  1. The select statement at the beginning declaratively describes the events that this rule listens for: carvoyant:init_vehicle or pds:profile_updated
  2. The pre block (or prelude) retrieves the current list of vehicles Carvoyant knows about and filters them to see if any of them match the current vehicle. Then it calculates the vehicle ID (vid) based on that information. The calculated value of vid corresponds to the three scenarios listed above. The code also computes the parameters to post to Carvoyant.
  3. The action block is guarded by a condition that ensures that the deviceId and vin aren't empty. Assuming the guard condition is met, the rule POSTs to carvoyant (the action carvoyant_post has all the logic for making the POST with appropriate OAuth access tokens).
  4. Finally, if the rule fired (i.e. it was selected and its guard condition was true), the postlude raises the fuse:vehicle_uninitialized event. The raise statement is further guarded to ensure that we don't signal that the vehicle is uninitialized when merely updating an existing vehicle.


There are a few points I'd like to make about the preceding discussion and example:

Service contracts must be explicit and honored. Because we lean heavily on independent services, knowing which services raise which events and what those events mean is critical. There have been times that I've built services that didn't quite connect because I had two events that meant the same thing and each service was looking for a different event.

Self-healing allows misses. Picos don't guarantee event delivery. So consider what happens if, for example, the initialization OK rule misses the http:post event and the Carvoyant vehicle information fails to get stored. The carvoyant initialize vehicle rule is tolerant of this failure: the system will merely find the vehicle again later and link it then.

Idempotency is important to loose coupling. There are multiple places where it's easier to just raise an event and rely on the fact that running a service again does no harm. The logic gets very complicated and tightly coupled when one service has to determine if it should be raising an event because another service isn't designed idempotently.

Scaling by adding services. The carvoyant initialize vehicle rule ensures that Fuse vehicles are linked to their representation at Carvoyant, but it doesn't sync historical data, such as trips. What if something gets disconnected by a bad OAuth token? Because of the microservice architecture, we can add other rules later that look at historical data in the Carvoyant system and sync the Fuse vehicle with that data. These new services can be added as new needs arise without significant changes to existing services.

Picos and CloudOS provide infrastructure for creating microservices in the Internet of Things. Picos provide a closure around the services and data for a given entity: in the case of Fuse, a vehicle. By representing an entity and allowing multiple copies to be created for like entities, picos provide persistently available data and services for things whether or not they're online.

Asynchronous, one-way communication is the rule. Note that two-way communication is limited in the preceding example. Rules are not waiting for returned results from other rules. Neither is it right to think of rules "calling" each other. While the KRL system does make some guarantees about execution order, much of the execution proceeds as dictated by the event-flow, which can be conditional and change with circumstances.

Picos cleanly separate data. Picos, representing a specific entity, and microservices, representing a specific business capability within the pico, provide real benefit to the Internet of Things. For example, if you sell your car, you can transfer the vehicle pico to the new owner after deleting the trip service and its associated data, leaving untouched the maintenance records, which are stored in the maintenance service.

A microservice architecture means picos are extensible. Microservices provide an easily extensible API, feature set, and data model for the pico. By changing the mix of services in a given pico, the API, functionality, and data model of that pico change commensurately. There is nothing that requires that every vehicle pico be the same. They can be customized by make, model, owner, or any other characteristic.


Viewing rules as microservices within a pico has given me a powerful way to view pico programming. As I've folded this way of thinking into my programming, I've found that many of the ideas and patterns being developed for microservices can equally be applied to KRL programming within a pico.

Further Reading

For more background on microservices, here are some resources:

Fuse as a Model of the Vehicle Ecosystem

Fuse is a connected-car product. But it's more than that. Fuse is also a system for modeling the relationships that a car has with people, organizations, places, things, and so on. Because of their utility, expense, longevity, and mobility, cars have numerous, significant relationships.

A relationship is more than an identifier. At the least, a relationship implies a means of communication that leads to interaction. Relationships are built on a mutual exchange of value (not necessarily monetary).

Among the most important relationships that a car has is with its owner. But there's more than one owner. At the beginning of its life, the car's owner is the manufacturer. Later the car is owned by the dealer, and then by a person or finance company. And, of course, cars are frequently resold. Over the course of its lifetime a car will have many owners.

The nature of relationships changes over time. For example, the car probably needs to maintain a relationship with the manufacturer and dealer after they are no longer owners. With these changes to the relationship come changes in rights and responsibilities.

In addition to relationships with owners, cars also have relationships with other players in the vehicle ecosystem, including fuel vendors, mechanics, parts vendors, insurance companies, finance companies, and government agencies. Vehicles exchange data and money with these players over time.

In addition to the owner, the car has relationships with other people: drivers, passengers, and pedestrians.

And the car might have relationships with other vehicles, traffic signals, the roadway, and even potholes.

Each of these relationships is based on unique identities for the various players. Each is based on communication channels that need to be authenticated, authorized, and controlled by policy. Each has different attributes, different rights, different responsibilities, different needs, and different frequencies of use. Some are permanent or semi-permanent. Others are temporary or even transient. For example, a car will have a (semi-)permanent relationship with its manufacturer and owner, a temporary relationship with the service shop changing the oil, and a transitory relationship with the traffic signal it's sitting at and other vehicles around it on the roadway.

Built correctly, these relationships form a model of the entire vehicle ecosystem. Relationships link the models for individual cars, the people who interact with them, roads, and so on to provide a vast, distributed online system that mirrors the vehicle ecosystem. Such a collection of linked, active models could provide a system for understanding traffic and the economic and social structures around car ownership and use.

Fuse builds these models using something we call a pico, short for "persistent compute object." Picos allow things to exist online as long-lived entities with a unique identity and purpose. Picos allow any thing, whether it's active, like the car, or passive, like the road or a pothole, to have an online representation that supports persistently storing data, running programs, and communicating with other things. Picos allow things like your car to support relationships and form networks that model their features, functionality, and utility in the physical world.

The real power of the Internet of Things is the relationships between them. What's missing in the CompuServe of Things are the relationships. Cars show this clearly because of their rich connections to other players in the vehicle ecosystem. Fuse brings the relationship network in that ecosystem online and makes it part of the Internet.

Credits: Kevin Cox outlined some of the benefits of connected cars in the project VRM mailing list a few days ago. I've used some of them above.

I delivered this post as a talk at IRM Summit 2014. Slides on Slideshare.

Building a Universal Silo

Silos de Trigueros

In a recent discussion about silos and the lack of "open" in "open APIs" that followed from Aral Balkan's superb How Web 2.0 killed the Internet, there was talk of a "universal silo." Let's consider how we might build such a universal silo.

One approach uses a centralized silo, like Facebook or Google, but presumably controlled by some benevolent authority. People usually resort to a centralized approach to solve problems because we are just a lot more comfortable with it. We believe that if we can just make the rules right, then we'll all be OK. All command-and-control systems are founded on this belief. It's not always bad, but it comes at a huge cost: the loss of personal autonomy. (As an aside, the second step down this path is to find an organization with a monopoly on violence [i.e. a government] to enforce the rules for you.)

The other approach, the one the Internet taught us, is to use a decentralized system to accomplish the goal. This is much harder for humans to wrap their minds around and a lot less satisfying because it requires surrendering authority and the ability to control the results. Humans so love to control outcomes.

We could build a "universal silo" using either approach. In the centralized approach, I think we'd end up with the UN/ITU fiasco in control (or something equally heinous). In the decentralized approach we'd get the Internet.

Yeah, the Internet is the one big silo we’re after. It’s not perfect. In particular, we need to weed out some of the centralization that has crept in (e.g. DNS, Root Certificate Authorities). But it’s the one big silo we all can be a part of without everyone subjecting themselves to a single administrative authority.

In Doc's piece on End User License Agreements, he says the Internet is just "A", an agreement. There are no end users, there's no licensing. Just agreements. This is the universal silo we can all live with.

Bonus link: Read Ben Werdmüller's How we're on the verge of an amazing new open web #indieweb for a positive spin on all this.

The CompuServe of Things


On the Net today we face a choice between freedom and captivity, independence and dependence.

You may view that statement as melodramatic, but the near future will incorporate computers into more facets of our lives than we can imagine. If we are to trust those computers and avoid giving up autonomy to centralized authorities, we have to create an open Internet of Things. I don’t think it’s going too far to say that our natural rights as human beings are based on a world that is heterarchical by nature—and that we are fooling ourselves if we think we can maintain those rights using only hierarchies and centralized systems. Building the CompuServe of Things instead of a true Internet of Things is a real threat to personal freedom and autonomy, and will halt progress for decades to come, unless we do the right thing now.

Online Services

Back in the day, some of us were lucky enough to be at a university and use the Internet. I started using the Internet in 1986 when I entered graduate school at UC Davis. If you weren't one of the chosen few with an Internet connection and wanted to communicate with friends, you used CompuServe, Prodigy, AOL or some other "online service." Each of these offered a way to send email, but only to people on the same service. They had forums where you could discuss various topics. And they all had what we'd call "apps" today. Sounds kind of like Facebook, actually. These services were silos. Each was an island that didn't interoperate with the others.

In the mid-90's, interest in the Web caused a number of companies to get into the dial-up Internet service business. Once connected to the Internet, you could email anyone, participate in forums anywhere, look at any Web site, shop from any store, and so on. AOL successfully made the transition from online service business to ISP; the rest did not.

Online 2.0: Return of the Silos

Each of these online service businesses sought to offer a complete soup-to-nuts experience and capitalized on their captive audiences in order to get businesses to pay for access. In fact, you don't have to look very hard to see that much of what's popular on the Internet today looks a lot like sophisticated versions of these online service businesses. Web 2.0 isn't so much about the Web as it is about recreating the online business models of the 80's and early 90's. Maybe we should call it Online 2.0 instead.

To understand the difference, consider GMail vs. Facebook Messaging. Because GMail is really just a massive Web-client on top of Internet mail protocols like SMTP, IMAP, and POP, you can use your GMail account to send email to any account on any email system on the Internet. And, if you decide you don't like GMail, you can switch to another email provider (at least if you have your own domain).

Facebook messaging, on the other hand, can only be used to talk to other Facebook users inside Facebook. Not only that, but I only get to use the clients that Facebook chooses for me. Facebook is going to make those choices based on what's best for Facebook. And most Web 2.0 business models ensure that the interests of Web 2.0 companies are not necessarily aligned with those of their users. Decisions to be non-interoperable aren't made out of ignorance; they're made on purpose. For example, WhatsApp uses an open protocol (XMPP), but chooses to be a silo.

Note: I'm not making a "Google good, Facebook bad" argument. I'm merely comparing GMail to Facebook messaging. Google has its own forms of lock-in in many of its products and is every bit as much a re-creation of the 1980's "online service" business model as Facebook.

Which brings us to the Internet of Things. The Internet of Things envisioned today isn't a real Internet. It's a forest of silos, built by well-meaning companies repeating the errors of history, giving us the modern equivalents of isolated mainframes, non-compatible LANs, and incompatible networks like those of AOL, CompuServe, and Prodigy. What we're building ought to be called the CompuServe of Things.

A Real, Open Internet of Things

If we were really building the Internet of Things, with all that that term implies, there'd be open, decentralized, heterarchical systems at its core, just like the Internet itself. There aren't. Sure, we're using TCP/IP and HTTP, but we're doing it in a way that is closed, centralized, and hierarchical with only a minimal nod to interoperability using APIs.

We need the Internet of Things to be the next step in the series that began with the general purpose PC and continued with the Internet and general purpose protocols—systems that support personal autonomy and choice. The coming Internet of Things envisions computing devices that will intermediate every aspect of our lives. I strongly believe that this will only provide the envisioned benefits or even be tolerable if we build an Internet of Things rather than a CompuServe of Things.

When we say the Internet is "open," we're using that word as shorthand for the three key concepts that underlie the Internet:

  1. Decentralization
  2. Heterarchy (what some call peer-to-peer connectivity)
  3. Interoperability

You might be thinking, aren't decentralization and heterarchy more or less the same? No. To see how they differ, consider two examples: DNS, the domain name service, and Facebook. DNS is decentralized, but hierarchical. Zone administrators update their zone files and determine in a completely decentralized manner which subdomains inside their domain correspond to which IP addresses (among other things). But the way DNS achieves global consensus about what these mappings mean is hierarchical. A few well-known servers for each top-level domain (TLD) point to the servers for the various domains inside the TLD, which in turn point to servers for subdomains inside them, and so on. There's exactly one, hierarchical copy of the mapping.
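That delegation chain is easy to sketch in code. The toy resolver below (a minimal sketch in Python, with an invented zone table standing in for real DNS servers) walks from the root down through referrals until it gets a final answer, which is the shape of a real recursive lookup:

```python
# Toy model of hierarchical DNS resolution. The zone data is invented
# for illustration; a real resolver queries actual name servers.
ZONES = {
    ".":          {"com": "tld-server"},          # root knows the TLDs
    "tld-server": {"example.com": "ns.example"},  # TLD knows the domains
    "ns.example": {"www.example.com": "93.184.216.34"},  # zone knows hosts
}

def resolve(name):
    """Walk the delegation chain from the root to an IP address."""
    server = "."
    labels = name.split(".")
    # Try progressively longer suffixes: com, example.com, www.example.com
    for i in range(len(labels) - 1, -1, -1):
        suffix = ".".join(labels[i:])
        answer = ZONES[server].get(suffix)
        if answer is None:
            continue
        if answer in ZONES:   # a referral to the next server down
            server = answer
        else:                 # a final answer: the IP address
            return answer
    return None

print(resolve("www.example.com"))  # walks root -> TLD -> zone
```

Every lookup bottoms out in the same single, authoritative copy of each zone, which is exactly the hierarchical property described above.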

Facebook, on the other hand, is heterarchical, but centralized. The Facebook Open Graph relates people to each other in a heterarchical fashion—peer-to-peer. But of course, it's completely centralized. The entire graph resides on Facebook's servers under Facebook's control.

Interoperability allows independently developed systems to interact. Interoperability provides for substitutability, allowing one system or service to be substituted for another without loss of basic functionality. As noted above, even though I use GMail as my email provider, I can talk to people who use Hotmail (i.e. they're interoperable) and I can, if I'm unhappy with GMail, substitute another email provider.

Decentralization, heterarchy, and interoperability are supported by protocol, the standards that govern interaction. One of the ironies of open systems like the Internet is that rules are more important than in closed systems. In a closed system, the hierarchical, centralized authority imposes standards that create order. In an open, decentralized, heterarchical system, the order must be agreed to ahead of time in the form of protocol.

These three concepts aren't optional. We won’t get the real Internet of Things unless we develop open systems that support decentralization, heterarchy, and interoperability. We might well ask "where are the protocols underlying the Internet of Things?" TCP/IP, HTTP, MQTT, etc. aren't enough because they work at a level below where the things will need to interoperate. Put another way, they leave unspecified many important processes (like discovery).

Personal Autonomy and Freedom

My point isn't a narrow technical one. I'm not arguing for an open Internet of Things because of perceived technical benefits. Rather, this is about personal autonomy and ultimately human rights. As I said above, the Internet of Things will put computers with connectivity into everything. And I really mean "every thing." They will intermediate every aspect of our lives. Our autonomy and freedom as humans depend on how we build the Internet of Things. Unless we put these connected things under the control of the individuals they serve without an intervening administrative authority, we will end up building something that undermines the quality of life it's meant to bolster.

What is an "intervening administrative authority?" Take your Fitbit as an example. You pay $99 for the device, but cannot use it without also creating an account at Fitbit and having all the data from the device flow through Fitbit's servers. In this case, Fitbit is the "intervening administrative authority." Whenever you create an account at Fitbit or anywhere else, you're being "administered" and giving up some amount of control. That's not necessarily a bad thing, but it does, taken in aggregate, place real and significant restrictions on personal autonomy.

If Fitbit decides to revoke my account, I will probably survive. But what if, in some future world, the root certificate authority of the identity documents I use for banking, shopping, travel, and a host of other things decides to revoke my identity for some reason? Or if my car stops running because Ford shuts off my account? People must have autonomy and be in control of the connected things in their life. There will be systems and services provided by others and they will, of necessity, be administered. But those administering authorities need not have control of people and their lives. We know how to solve this problem. Interoperability takes "intervening" out of "administrative authority."

The only way we get an open Internet of Things is to build it. That means we have to do the hard work of figuring out the protocols—and business models—that support it. I'm heartened by developments like Bitcoin's blockchain algorithm, the #indieweb movement, Telehash, XDI Discovery, MaidSafe, and others. And, of course, I've got my own work on KRL, CloudOS, and Fuse. But there is still much to do.

We are at a crossroads, with a decision to make about what kind of future we want. We can build the world we want to live in or we can do what's easy, and profitable, in the short run. The choice is ours.

Update: After posting this, I found that Adam McEwen used the term "CompuServe of Things" in a talk he gave in 2013: Risking a Compuserve of Things

Fuse is a Telemetrics Platform for Your Car: Trips on Your Calendar

fuse trio

When I describe Fuse to people, I often say it's three things:

  1. A device that plugs into the OBD II port on your car, has a GPS and cellular connection, and constantly streams data from your car.
  2. A mobile app for interacting with the data.
  3. A personal cloud platform, under the car owner's control, where the data is stored and processed.

I talk about them in this order because I usually go on to emphasize that what's really important here is the personal cloud. Put another way, Fuse isn't an OBD II device and an app. Rather, Fuse is a telemetrics platform for your car. More importantly, it's your telemetrics platform.

In Fuse, the vehicle sends data to its micro-cloud (what I call a "pico") whenever something changes (the fuel is low, the battery is low, there's a diagnostic code, the ignition turned off, and so on). The pico stores and organizes that data. Other things, like an app or a Web site allow the owner to interact with that data.


Yesterday I had an idea that shows the power of viewing Fuse as a vehicle telemetrics system instead of a mere connected-car app. One of the things that makes data useful is putting it in context. It occurred to me that the right context for the trip data from my car is my calendar.

To show you what I mean, here's a screenshot of my calendar with the trips I made Tuesday. Note that there's a trip of 7.2 miles from 9:43am to 9:58am. The context, of course, is the Fuse team meeting I had at the UVU BRC right after it (in blue). I can see at a glance that this trip was connected to the Fuse meeting at 10am. The interleaving of my trips with my appointments makes the reason for the trip clear. (I've put a red box around the two appointments I'm talking about so you can see them in the small image. Click thru to see the full size calendar.)

Trips on my calendar

Another thing that's clear from putting my trips on a calendar is that I can see how much of my day was spent in the car. This was a particularly heavy day since I went to the airport to pick up my brother and it shows.

As an added feature, the appointment includes a URL to the Google Map for the trip. Here's the map for that trip to UVU:

Trip from my calendar

This works using an iCalendar feed from my truck's pico (the micro-cloud that is storing the data). I installed a function in the pico that uses the stored trips to generate an iCalendar feed on demand. I simply subscribe to the URL for that function from my calendar. My calendar updates the appointments as I drive my truck. For example, I just got back from picking up my daughter at school and the trip's sitting there on my calendar. This illustrates why I believe calendars will be one of the key UI components of the Internet of Things.

We can modify the flow I showed earlier to take this new element into account:


The data is now used by my calendar in addition to the app. And of course, there's no limit to the ways we can use the vehicle's data. Picos are programmable and each one can be customized as its owner requires—just like a PC in the good old days. As a result, they create a flexible substrate for Fuse. Using that programmability, I was able to create an iCalendar subscription from my Fuse data in an afternoon. This isn't just possible for me because I'm building the Fuse API; this power is available to anyone. One of the things that makes Fuse unique is that not only does it have an API, but that API is extensible by anyone willing to learn how picos work and program a little KRL.

The Technical Details

If you're curious about the KRL that creates the iCalendar feed, here's the actual function that does the job:

ical_for_vehicle = function(search){
  num_trips = 25; // return the last 25 trips
  sort_opt = {
    "path" : ["endTime"],
    "reverse" : true,
    "compare" : "datetime"
  };
  sorted_keys = this2that:transform(ent:trip_summaries, sort_opt);
  t = sorted_keys
      .map(function(k) {
         e = ent:trip_summaries{k};
         start = waypointToArray(e{"startWaypoint"}).join(",");
         dest = waypointToArray(e{"endWaypoint"}).join(",");
         miles = e{"mileage"} || "unknown";
         url = "http://maps.google.com/maps?saddr=#{start}&daddr=#{dest}";
         {"dtstart" : e{"startTime"},
          "dtend" : e{"endTime"},
          "summary" : "Trip of #{miles} miles",
          "url" : url,
          "description" : "Trip ID: " + e{"id"},
          "uid" : "" + e{"id"}
         }
       });
  vdata = vehicle:vehicleSummary();
  meta_data = {"name" : vdata{"label"},
               "desc" : "Calendar of trips for " + vdata{"label"}};
  ical:from_array(t, meta_data)
}

As KRL functions go, this one's pretty complicated, but it's basically creating a sorted list of keys from the trip data (sorted_keys) and then using those keys (k) in a map() to access trips (ent:trip_summaries) and create a JSON version of an iCalendar entry (t) that we feed to ical:from_array() to generate the actual iCalendar data.

Because of how the Sky Cloud meta-API works in a pico, I'm able to expose this function via a URL and that becomes the iCalendar subscription URL.
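For readers who don't know KRL, the same transformation is easy to sketch in Python: sort trip records by end time, map each one to an iCalendar VEVENT, and wrap the result in a VCALENDAR. The trip records and field names below are invented to mirror the ones the KRL function uses:

```python
# Sketch of turning trip summaries into an iCalendar feed.
# The trip data and field names are hypothetical, mirroring the KRL.
trips = [
    {"id": "t2", "startTime": "20140401T094300Z",
     "endTime": "20140401T095800Z", "mileage": "7.2"},
    {"id": "t1", "startTime": "20140401T080000Z",
     "endTime": "20140401T081500Z", "mileage": "3.1"},
]

def trip_to_vevent(trip):
    """Map one trip record to an iCalendar VEVENT block."""
    return "\n".join([
        "BEGIN:VEVENT",
        "UID:" + trip["id"],
        "DTSTART:" + trip["startTime"],
        "DTEND:" + trip["endTime"],
        "SUMMARY:Trip of %s miles" % trip["mileage"],
        "END:VEVENT",
    ])

def ical_feed(trips):
    """Sort trips by end time (newest first) and emit a VCALENDAR."""
    ordered = sorted(trips, key=lambda t: t["endTime"], reverse=True)
    events = "\n".join(trip_to_vevent(t) for t in ordered)
    return "BEGIN:VCALENDAR\nVERSION:2.0\n" + events + "\nEND:VCALENDAR"

feed = ical_feed(trips)
```

Serve the resulting text with the `text/calendar` MIME type at a stable URL and any calendar client can subscribe to it, which is all the pico's function URL is doing.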

On Names and Heterarchy

Names not to be forgotten

When I first started using Unix, DNS was not widely used. Instead we FTP'd hosts files from a computer at Berkeley, merged it with a local hosts file, and installed it in /etc. Mail addresses had ! in them to specify explicit internal routing from a well-known host to the local machine. We had machine names, but no global system for dereferencing them.

DNS changed all that by providing a decentralized naming service that based lookup on a hierarchy starting with a set of well-known machines servicing a top-level domain (TLD), like .com. Nothing was more important than a great domain name with a .com at the end during the 90's. URLs (Uniform Resource Locators), the global naming system for web pages, were based on DNS, so having a short, memorable domain name was, and still is, an asset.

Of course the good domain names were quickly all gone. I was lucky enough to own a few good ones over the years, and I was also early enough to get my name. If you're just coming to this party, however, your name is long gone unless you want to use a TLD that no one has heard of and won't recognize. Anyone in the .pe namespace?


Names are used to refer to things. Without names, we'd constantly be describing people, places, and things to each other whenever we wanted to talk about them. You do that now when you can't remember someone's name: "You, know, the guy who was in the green shirt, with the beard, walking a dog?" Any given entity can have multiple names that all refer to the same thing. I'm Phil, Phillip, Phil Windley, Dad, and so on depending on the circumstance.

In computing, we use names for similar reasons. We want to easily refer to things like memory locations (variables), inodes (file names), IP addresses (domain names), and so on. Names usually possess several important properties, including:

  • Names should be unique within some specific namespace
  • Names should be memorable
  • Names should be short enough for humans to type into computing devices

As Crosbie Fitch points out in his excellent treatise on identity, names don't need to be globally unique, just unique enough. Names are identifiers we put on things that already have an identity. Names aren't the same thing as identity.

Do We Need Names?

Do we need names? At first blush everyone says "yes," but when you dig deeper there are lots of systems where we don't really need names, at least not in the form of a direct mapping between names and addresses.

The best example is the Web itself. URLs aren't names. They're addresses. While they are globally unique, they aren't memorable and most people hate typing them into things. If I'm looking for IBM, I'm happy to type the company's domain name into my browser. But if I'm looking for a technical report by IBM from 2006? Even if I know the URL, I'm not likely to type it in; instead, I'll just search for it using a few key words. Most of the time that works so well that we're surprised when it doesn't.

There are several alternatives to globally unique names.


When we type keywords into a search engine we're using an alternative to names: discovery.

The World Wide Web solved several important problems but discovery wasn't one of them. As a result, Aliweb, Yahoo!, and a host of other companies and projects sprang up to solve the discovery problem. Ultimately Google won the search engine wars of the late 90s. People have argued that search and discovery are natural monopolies. Maybe. But there are heterarchical methods of finding things.

When I mention this to people, I often get asked "what do you have against Google?" Nothing specifically against Google. But I think the model of centralized discovery, mail, communication, and friendship has significant drawbacks. The most obvious one is the problem of having a single point of failure. All of these products and companies will eventually go away, whether you're done using them or not.

A larger problem is censorship. Notice that while many despotic regimes will try to shut down Twitter or some other centralized service from time to time, they have a much tougher time restricting access to and use of the larger Web and more so the Internet (yeah, there's a difference despite the media's confusion).

Larger still is the privacy question. Twitter, Facebook, Google, and their ilk are the stuff of dreams for tyrants, bullies, corporate spies, and others who wish you harm. But it's more insidious than that. The issue of online privacy isn't limited to conspiracy theories about some hypothetical threat. The real threat to our privacy isn't the NSA, it's the retailers and others who want to sell you stuff. They employ centralized systems like Facebook and Google every hour of every day to use your personal information against you. They'd claim they're using your data to help you. Ask yourself what percentage of all the ads you see in a week you consider helpful.

Personal Directories and Introductions

Discovery isn't the only way to get around a lack of names. To see how, think about your house address. It's a long unwieldy string of digits and letters. Resolving a person's name to their address has no global solution. That is, there's no global directory (except maybe at Acxiom or the NSA) that maps names to addresses. Even the Post Office in over 200 years of existence hasn't thought "Hey! We need to create a global directory of names and addresses!" Or if they have, it didn't succeed.

So how do we get around this? We exchange addresses with people and keep our own directories. We avoid security issues by exchanging or verifying addresses out of band. For the most part, this is good enough.

Personal directories are largely how people exchange bitcoins and other cryptocurrencies. I give you my bitcoin address in a separate channel (e.g. email, my web site, etc.). You store it in a personal directory on your own system. When you want to send me money, you put my bitcoin address in your wallet. To make it even more interesting, since bitcoin addresses are derived from public-private key pairs, I can generate a new one for every person, creating what amount to personal, peer-to-peer channels for exchanging money.
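A personal directory doesn't need to be anything fancy. Here's a minimal sketch (the contact name and address below are made up); the important property is that names only have to be unique to the directory's owner, not globally:

```python
# A minimal personal directory: a local, private mapping from the
# names *I* use for people to the long addresses they gave me out
# of band. Names need only be unique to me, not globally.
directory = {}

def remember(my_name_for_them, address):
    """Store an address under whatever name I find memorable."""
    directory[my_name_for_them] = address

def lookup(my_name_for_them):
    """Resolve one of my personal names to an address, if I have it."""
    return directory.get(my_name_for_them)

# Alice gives me her (made-up) address over a trusted channel;
# from then on I can refer to her by a short, memorable name.
remember("alice", "1AliceExampleAddressXXXXXXXXXXXXXX")
print(lookup("alice"))
```

Your phone's contact list and a bitcoin wallet's address book are both exactly this structure.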

When we built Forever, we relied on people using email for introducing their personal clouds to one another. This introduction ceremony provided a convenient way to exchange the long addresses of the personal clouds and store them away for future use.

So long as there is some trusted way to communicate with the party you're connecting to, long addresses aren't as big a problem as you might think. We only need to resort to names and discovery when we don't have a trusted channel.

Heterarchical Naming Systems

The problem with personal directories is that they make global lookup difficult. Unless I have some pre-existing relationship with you or a friend who'll do an introduction, a personal directory does me little good. One way to solve this problem is with systems that work like DNS, but are heterarchical.

I've recently been playing with a few interesting naming systems based on bitcoin. Whatever you may think of bitcoin as a currency, there is little doubt that bitcoin presents a working example of a global distributed consensus system.

Distributed consensus is the foundational feature of a heterarchical naming system. To understand why, think about DNS. DNS distributes the responsibility of assigning names, but it avoids the problem of consensus (agreeing on what names stand for what IP addresses) by creating a single copy of the mapping. This single copy presents a single point of failure and a convenient means of censoring or even changing portions of the map.

If we want to distribute the copy of the mapping and make everyone responsible for maintaining their own mapping between names and addresses, we need a distributed consensus system. Bitcoin provides exactly such a system in the form of a block chain, a cryptographic data structure with a functional means of validating updates. Systems like Namecoin use the block chain to map names to addresses in a heterarchical fashion. I have registered windley.bit using Namecoin. If you type it into your browser it won't resolve, since your operating system only knows how to resolve names via DNS, but that's not a fundamental limitation; you can patch your OS to resolve names using alternative mappings like Namecoin. Your OS didn't understand TCP/IP either in the distant past. I used to regularly patch Windows 3.1 by adding a TCP/IP stack. Windows 95 included it due to popular demand. Right now, I'm using a browser plugin from FreeSpeechMe to resolve .bit domains for me.

What's the advantage of windley.bit over a conventional DNS name? Simply that the mapping is completely distributed. There is no single point of failure. You can turn off all the TLD servers and windley.bit will still work. One of the key provisions of the Stop Online Piracy Act (fortunately dead for now) would have used DNS to censor Web sites deemed to be infringing. Heterarchical directories would be immune from such silliness.

Aside: Namecoin is actually a general purpose distributed key-value store. So, domain names are just one thing you can do with it.
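Stripped of the mining and networking, the data structure behind such a registry is simple: an append-only log of (name, value) registrations where the first registration of a name wins, and each entry is chained to the previous one by a hash so the history can't be quietly rewritten. A toy sketch, with made-up names and IPs from the documentation address range:

```python
import hashlib
import json

# Toy name registry in the shape of a block chain: an append-only,
# hash-linked log where the first registration of a name wins.
# No mining, consensus, or networking here--just the data structure.
chain = []

def register(name, value):
    """Append a registration unless the name is already taken."""
    if any(entry["name"] == name for entry in chain):
        return False                      # first-come, first-served
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"name": name, "value": value, "prev": prev}
    # Hash the entry (including the previous hash) to link the log.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return True

def resolve(name):
    """Look a name up in my own full copy of the log."""
    for entry in chain:
        if entry["name"] == name:
            return entry["value"]
    return None

register("windley.bit", "203.0.113.7")    # made-up mapping
```

Because every participant holds the whole log and can verify the hash links, everyone can resolve names from their own copy; the hard part, which the block chain's proof-of-work supplies, is getting everyone to agree on which log is the real one.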


I'm very excited about heterarchical technologies coming into play. I believe the near future will incorporate computers into more facets of our lives than we can even imagine. If we're to trust those computers and avoid giving up autonomy to centralized authorities, heterarchical structures will be fundamental. I don't think it's going too far to say that our natural rights as human beings are based on a world that is heterarchical (at the global level) and that we are fooling ourselves if we believe we can engineer virtual systems that respect or protect those rights using hierarchies and centralized authorities.

Bonus link: Adriana Lukas has an excellent talk at TEDxKoeln on heterarchies and key principles.