Fuse with Two Owners

If you've followed my posts on the Fuse Architecture, you'll likely recognize this picture:

fuse microservice overall

The architecture uses a pico to represent each vehicle, as well as one for the owner and one for the fleet. This organization provides essential communication pathways and establishes a structural representation of ownership and control.

I've also shown architectures with more than one owner, not to mention relationships with everything from manufacturers and service vendors to potholes.

With all that promise, however, I built Fuse with support for just one owner. That was a reasonable simplifying assumption when I was just getting started, but last week, with some prodding from Jim Pasquale, I decided I'd explore what it would take to support multiple owners, like so:

Fuse with multiple owners

Introductions

The first step was to create the services inside the owner and fleet picos necessary to support introductions and subscriptions. Each pico subscription is unique. Each owner has a separate channel to the fleet and those channels have unique names.

I determined that the best way to support this was to re-use existing inter-pico subscription functionality as much as possible, since it has been proven to work reliably over several years. The introduction process goes something like this:

  1. The prospective owner asks the current owner for an introduction to the fleet
  2. The current owner asks the fleet for a new channel and name to give to the prospective owner
  3. The prospective owner uses the new channel and name to subscribe to the fleet

The diagram below shows how this works. Green interactions depend on existing CloudOS services. Blue arrows are new interactions I wrote specifically to support introductions. I didn't implement the black interactions for this experiment, but they'd be needed to roll this out for Fuse generally.

introduction and subscription in picos

The fleet will only give introductions to a pico with which it has an existing FleetOwner relationship, and it will only honor a subscription request that it created itself, with the right name and channel identifier (shared key).

The overall experience for Fuse owners could mirror something like the Forever experiment we did several years ago. We'd likely build it into the Fuse Management Console (FMC).

The introduction pattern shows up in multiple places, albeit with different channel relationships (besides FleetOwner). For example, a similar process could be used to transfer ownership of a vehicle between two fleets. The primary difference is that the vehicle would drop its subscription to the old fleet once the transfer was complete.

Using Backchannels

I knew that giving a fleet two owners would cause some problems. After all, none of my testing—not even my mindset while programming—took two owners into account. Initial testing showed that both owners of the fleet could log into FMC and manipulate the fleet. So far so good.

The first place a problem showed up was weekly reports. Fuse sends each owner a weekly report. This can be disabled in the preferences. When enabled, the owner's pico has a scheduled event that fires once a week to send the report. The owner doesn't know how to generate a report, so it tells the fleet to generate a report. The fleet pico generates the report asynchronously.

The fleet, however, doesn't know how to email the owner. So, the fleet pico routes the completed report back to the owner for mailing. That's where things went wrong. Here's the rule that routes events back to the owner when needed:

rule route_to_owner {
  select when fuse email_for_owner ...
  pre {
    owner = CloudOS:subscriptionList(common:namespace(),"FleetOwner")
              .head()
              .pick("$.eventChannel");
  }
  event:send({"cid": owner}, "fuse", event:type())
     with attrs = event:attrs();
}

The problem is the head() operator. It simply returns the first member of the subscription list.

You can imagine what happened. When reports went out, one owner (whichever one happened to be first in the subscription list) received two reports and the other owner received none. This was easily remedied by having the rule find the owner channel based on who asked for the report in the first place (determined by the incoming channel, as returned by meta:eci()):

rule route_to_owner {
  select when fuse email_for_owner ...
  pre {
    owner_subs = CloudOS:subscriptionList(common:namespace(),"FleetOwner");
    matching_owner = owner_subs
                       .filter(function(sub){ 
                                 sub{"backChannel"} eq meta:eci() 
                               });
    owner = matching_owner.head().pick("$.eventChannel");
  }
  event:send({"cid": owner}, "fuse", event:type())
    with attrs = event:attrs();
}

This solves the problem by using the backchannel to find who sent the initial event. Other times, it might be best to look up the owner by pico name or other data. This is a good pattern for event routing in general: don't depend on stored or specific channels for communicating with other picos; look them up instead. This is generally a good idea since event channel identifiers can change.

I'm sure I'll find a few more problems as we play with this some more, but it was nice to know that the existing subscription process could form a large part of the introduction process, and that finding the right owner involved only minor changes to look up channels rather than rely on specific ones. One of the things I like about modeling with picos is that having unique, persistent objects to represent real-world entities and concepts (like owners, fleets, and vehicles) makes handling concepts like multiple owners straightforward.


Self Sovereign Authorities and the Epic Struggle for IoT

I finished reading three things this week that all tied together in my mind and so I wanted to mention them. One is a full-length book, but the other two are short essays.

Jefferson and Hamilton: The Rivalry That Forged a Nation by John Ferling—I'm a fan of early American history and have read a lot about Jefferson (including Malone's six-volume biography) and some about Hamilton. What was interesting about this volume was seeing the lives of the two foes play out in parallel and focusing on their interactions. The takeaway for me was that they were both right. A contradiction, but one that still plays out daily in US politics. We need national strength (Hamilton) and democracy (Jefferson) even though those things are diametrically opposed and must be carefully balanced.

Hamilton is clearly the winner in the way history played out. He got his strong federal government and manufacturing-based society. Jefferson's dream for an agrarian society has faded with the 19th century. But his reason for wanting it—fear of the concentration of power and corruption (in the Lessig sense) that industry would bring—was spot on. Which brings me to the next two short essays.

The Epic Struggle of the Internet of Things by Bruce Sterling—This little essay will cost you $2.99 at Amazon, but it's worth it. Sterling lays bare the fallacy of the Internet of Things based on goods sold to us by the powers-that-be (namely the big five: Google, Apple, Facebook, Amazon, and Microsoft) and their lesser counterparts. The behemoths don't compete with each other; they seek to disrupt others in ways that are not connected with any free market you're familiar with. We're up in arms about the NSA spying on us, but all too willing to sign up for surveillance from peddlers of connected things. Our freedom and independence as human beings are at stake, as I wrote about in The CompuServe of Things. How do we escape and regain our independence (the democracy that Jefferson fought for)? Read on.

Why Self-Sovereignty Matters by John Clippinger—This is Chapter 2 in a collection of essays edited by John H. Clippinger and David Bollier called From Bitcoin to Burning Man and Beyond: The Quest for Identity and Autonomy in a Digital Society (free PDF). John does a nice job of laying out in non-technical terms how we can be the source of our own identity (what is known as sovereign-source identity or self-sovereignty) rather than being identified only within the administrative domains of the Big Five, the Government, and other lesser administrations. This comes off as arcane, but it has a simple premise: why aren't you the source of your own identity? Why are all your identifiers given to you by others? This is the fundamental roadblock to true independence online and one that greatly interests me.

In my blog post on the CompuServe of Things, I wrote:

On the Net today we face a choice between freedom and captivity, independence and dependence.

I really believe that. I don't hate the Big Five. I love their products and use them regularly, but I want to do so as an independent entity with inalienable rights, not as a serf in their digital estates.


Events, Picos, and Microservices

I spoke at Apistrat 2014 today in Chicago on the Fuse architecture and API. The Fuse architecture is unique because it uses picos and event-query APIs to create a connected car platform. I found microservices to be a useful model for thinking about building features in the Fuse architecture. Here are my slides:


A University API

BYU Logo Big

BYU has been in the API and Web Services game for a long time. Kelly Flanagan, BYU's CIO, started promoting the idea to his team almost 10 years ago. The result? BYU has over 900 services in its Web Services registry. Some are small and some are big, but almost everything has a Web service of some kind.

Of course, this is both good news and bad news. Having services for everything is great. But a lot of them are quite tightly coupled to the underlying backend system that gives rise to them. On top of that, the same entity, say a student, will have different identifiers depending on which service you use. A developer writing an application that touches students will have to deal with multiple URL formats, identifiers, data formats, and error messages.

We're aiming to fix that by designing and implementing a University API. The idea is simple: identify the fundamental resources that make up the business of the university and design a single, consistent API around them. A facade layer will sit between the University API and the underlying systems to do the translation and deal with format, identifier, and other issues.

The name "API" reflects an important shift in how we view providing services. When you're providing a service, it's easy to fall into the trap of collecting an ad hoc mish-mash of service endpoints and thinking you're done. The "I" in API is for "interface." When you're providing an interface to the university, not just a collection of services, your mindset shifts. Specifically, in designing the University API we're aiming for something with the following properties:

  • Business-oriented—the API should be understandable to people who understand how a university works without having to understand anything about the underlying implementation. Many more people know how a university works than could ever know about the underlying implementation. An API based on resources familiar to anyone who understands a university makes the API useful even to non-programmers.
  • Consistent—a developer should see a consistent pattern in URL formats, identifiers, data formats, and error messages. Consistency allows developers to anticipate how the API will work, even when they're working with a new resource.
  • Complete—over time the University API ought to be an interface to every thing at the University that works via software (which is to say everything).
  • Obvious—using the API should be obvious to anyone who understands the general principles without needing to rely excessively on documentation.
  • Discoverable—a program should be able to discover allowed state transitions, query parameters, and so on to the extent possible.
  • Long-lived—An API is like a programming language in that it is a notation, not a technology. The goal is to create something that is not only intuitive, but stands the test of time. Designing for long-term use is more difficult than designing for short-term efficiency.

The fundamental business of the university doesn't change rapidly. BYU has had students, classes, and instructors for 140 years. Likely, instructors will still be teaching classes to students in 20 years. The API to a university ought to reflect that stability. This doesn't mean it won't change, but ideally the University API will evolve over a period of decades in the same way a language does. Perl 5 is quite different and much more useful than Perl 2, for example, but it's still Perl. This gives the University API an importance that an ad hoc collection of services would be hard pressed to meet.

Building a University API has multiple advantages:

  1. First, and most obviously, making the API consistent and understandable will make it easier for developers to use it in building applications. This includes developers in the Office of Information Technology, BYU's central IT department, as well as developers in other units around campus. Further, there's no reason that students and others shouldn't be able to use the APIs, where authorized (see below), to create new services and GUIs on top of the standard university systems. The University API is the heart of a great university developer program.
  2. Beyond making it easy for developers, a consistent University API eases the pain of changing out underlying systems by introducing a layer of indirection. Once a University API is in place, underlying parts of the system can be changed out and the facade layer adjusted so that the API presented to developers doesn't change, or, more likely, only changes in response to new features.
  3. A third advantage of the University API is that it provides a single place to apply authorization policies. This is a huge advantage because it allows us to apply formal, specified policy parametrically to the API rather than doing it ad hoc. This results in more consistent and accurate data protection.
  4. Finally, a University API serves as a definition for the business of the University that befits the reality that more and more of the university's business is controlled and mediated via software systems. By designing a notation and semantics that matches what people believe the University to be, we document how the University's business is conducted.

How do you get started on such a monumental undertaking? We've created a University API team that is busy discussing, designing, and mocking up APIs for a small set of interrelated, core resources. We hope to have mock docs for review in the next few weeks. For now, we're focused on the following set:

/students
/instructors
/courses
/classes
/locations

Others that will eventually need to be considered include /colleges, /departments, /programs, and so on. There could easily be dozens of top-level resources in a university, but that's much more manageable than 900. And when they're logical and consistent that's especially true. For now we've identified five resources that form the core of the API and touch on activities that the university cares about most.

As you'd expect, a GET on any of these resources returns a collection of all the members of that resource.

GET /students

Obviously some of these collections could be very large, so they will usually be filtered and paginated. Take /students for example. By rights, this resource should include not only all the current full-time students, but part-time students, independent study students, and so on—easily tens of thousands of records. Being able to filter this list so that it's the collection of students you want (e.g. all full-time students in the College of Engineering) will be critical.
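
As a sketch of what filtering and pagination might look like to a client (the parameter names here are assumptions, not the final design), a request URL could be assembled like this:

```javascript
// Build a filtered, paginated collection URL. Query parameter names are hypothetical.
function collectionUrl(base, resource, { filters = {}, page = 1, pageSize = 50 } = {}) {
  const params = new URLSearchParams(filters);
  params.set("page", String(page));
  params.set("page_size", String(pageSize));
  return `${base}/${resource}?${params.toString()}`;
}

// e.g. all full-time students in the College of Engineering
const url = collectionUrl("https://api.byu.example", "students", {
  filters: { enrollment_status: "full-time", college: "engineering" },
});
```

The point is consistency: if every collection resource accepts the same filter and pagination parameters, developers can anticipate how a resource they've never used will behave.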

Performing a GET on a resource with an identifier in the path returns the record for that identifier.

GET /students/:id

Again, the result could be very large. In theory, a student record contains everything the university knows about the student. In his white paper on University APIs, Kin Lane listed 11 types of data that might be in a student resource without even getting to things like transcripts, grades, applications, and so on. There are dozens of sub-resources inside a resource as complicated as /students. In practice, most programmers don't want (and aren't authorized to get) all the data about a particular student. We're attacking this problem by creating useful field sets for the most popular data for any given resource.
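
One way to picture field sets (the set names and fields below are purely illustrative): the full record stays on the server, and a request names the subsets it wants, with the facade returning only those fields.

```javascript
// Return only the fields belonging to the requested field sets (hypothetical sets).
const FIELD_SETS = {
  basic: ["id", "name", "email"],
  enrollment: ["id", "classes", "credit_hours"],
};

function applyFieldSets(record, setNames) {
  const wanted = new Set(setNames.flatMap((s) => FIELD_SETS[s] || []));
  return Object.fromEntries(
    Object.entries(record).filter(([key]) => wanted.has(key))
  );
}
```

Authorization fits the same shape: a caller not permitted to see a field set simply never gets those fields back.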

We're also dealing with issues such as the following:

  • What are the meta values for the API (e.g. what values are appropriate for a given field) and how should we represent them in the API?
  • How do we handle sub-resources? For example, a class has a set of values for prerequisites that is a complex record in its own right. But prerequisites don't deserve to be a top-level resource because they're only meaningful in the context of a course.
  • Many of the identifiers for a resource (like a class) are aggregate identifiers. In the case of a class, the identifier is made from the term, department, course number, and section.
  • What is the right boundary between workflow and user interface? For example, when a student drops a class and that has cascading consequences, should the client or the server be responsible for ensuring the student understands those consequences?
  • How deep do we go when returning a resource? For example, when we get a class enrollment, do we return links to the student records or the data about the students? If the latter, what does that communicate to developers about what can and can't be updated in the record?
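
The last question can be made concrete with two candidate shapes for a class-enrollment response (both hypothetical):

```javascript
// Option 1: enrollment holds links to student resources; the client follows them.
function enrollmentWithLinks(classId, studentIds) {
  return {
    class: `/classes/${classId}`,
    students: studentIds.map((id) => ({ href: `/students/${id}` })),
  };
}

// Option 2: enrollment embeds student data directly. Embedded copies may
// wrongly suggest to developers that the nested records can be updated in place.
function enrollmentEmbedded(classId, students) {
  return { class: `/classes/${classId}`, students };
}
```

Links keep update semantics unambiguous at the cost of extra round trips; embedding saves requests but blurs what is writable.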

There are new issues that come up all the time. We're still thinking, designing, and planning, so if you have suggestions, we'd love to hear them.

The effort has been fun and we're anxious to make at least part of this design exercise real. Watch for projects that tackle parts of this enterprise over the coming months.



Suggesting Changes to Google Places

Google Maps Icon Buttons

The Fuse app lets drivers record fuel purchases. One of the features it has is populating a pulldown of nearby gas stations so that the driver doesn't have to type all that in. I buy gas at the Costco in Lehi occasionally and I noticed it wasn't in the list while the Costco in Orem was. I got to wondering how the data for Google Places (the API Fuse uses) is collected and updated.

A quick search for the Costco in Lehi and the one in Orem on Google Maps didn't show any difference. There also wasn't an obvious difference when I drilled into the reviews. It was only when I clicked on the "Edit details" link that I noticed that the Costco in Orem had the following categories:

Optician, Discount Store, Gas Station, Department Store, Wholesale Club,...

whereas the one in Lehi only showed this:

Department Store

Fortunately, Google takes user comments on business listings, so I clicked "add category" and added "Gas Station." The change doesn't show up right away since Google reviews these types of changes.

So, the sequence for updating a business categorization:

  1. Find the business on Google Maps
  2. Click the "N reviews" link (or the "Write a review" link)
  3. Click "Edit details" under "Contact Information"
  4. Click the ">" to the left of "Category" to expose the category entry fields
  5. Edit as needed
  6. Submit

All in all, this is not easy to discover without some poking around. I'd tried looking from my phone a few times and never got far enough to make a change. But, once you know the secret, it's not too hard. I'm surprised that Costco doesn't do a better job of categorizing its stores in things like Google Places.


The Dangers of Internet Voting

I Voted!

Current computer operating systems, Internet protocols, and standard computing practices are inherently insecure. Recent news stories about break-ins at Target, UPS, Community Health Systems, and the Nuclear Regulatory Commission point out all too well both the weaknesses in the system and the power of hackers to take advantage of those weaknesses. Groups mounting attacks include both state and criminal actors. Yet in spite of this inherent insecurity, the Internet has become an indispensable tool for myriad activities. Consider why.

In many cases, we’re able to get around the inherent insecurity of the Internet because the value to be received from attacking a weakness isn’t sufficient to attract the attention of those who exploit these weaknesses for gain.

In other cases there is significant value to be gained by attacking a system. Yet despite that, the use of the Internet for commerce, enterprise systems, data dissemination, and other activities continues to grow because the rewards for using the system outweigh the risks and those risks are mitigated by other factors.

Similarly, Internet voting presents a valuable target for hackers. Elections have consequences and the ability to influence an election is enticing to those who have a stake in the outcome of an election. The list of potential attackers is large: individual hackers, political parties, international criminal organizations, hostile foreign governments, or even terrorists have a stake in the outcome of elections and can be expected to use weaknesses in the voting system to gain influence or simply cause mischief.

Proponents of Internet voting point out the great benefits to be gained from making voting easier. They point out that the Internet has been used to great benefit in other activities. Specifically, the refrain, “if we can shop online, why can’t we vote online?” is frequently heard. After all, online shopping and other activities continue to grow in spite of security problems.

Online voting has three properties that, taken together, set it apart from other online activities like shopping:

  1. Secret ballots are required — We require that people register to vote, but how they vote is kept secret from election officials and others. This measure protects the validity of the vote by making it more difficult to coerce or pay people to vote a particular way and by ensuring people that they won’t have to answer to others for their decisions in the polling booth. Internet voting initiatives have to ensure secret ballots.
  2. Computing environment is uncontrolled — Online voting would have to allow people to vote from their own devices in their own homes or businesses to have the desired impact. But a study in 2010 showed that 48% of 22 million scanned computers were infected with a virus and “over a million and a half [were] infected with crimeware/banker trojans.” Any Internet voting system has to be able to run on a collection of computers that is not only not under the control of the voting authorities, but not wholly under the control of the voter either.
  3. Margin for error is very small — Elections are often decided by very small margins. Unlike a business transaction, where the likelihood of fraud can be statistically calculated and then factored into the cost of doing business, there is no margin in a voting scenario to use in mitigating fraud. The margin for fraud has to be very near zero.

These three properties of voting, taken together, make online voting a very different proposition than other activities that we regularly undertake online.

To see why, consider the problem of ensuring the integrity of the vote. Vote integrity is particularly important because people will not trust a government when they don’t believe that the results of elections are valid.

The only reason we know about security breaches at Target and others is because the system is, by design, transparent and auditable. Even if these companies were unable to prevent an attack, it was abundantly clear after the fact that a breach had occurred. In a voting system, however, the secret ballot and uncontrolled computing environment combine to make auditing the validity of the vote impossible.

To make the online commerce scenario analogous to online voting, the online commerce company would know that a customer had bought something but not what she'd bought or how much she'd spent, except in aggregate with other purchases. Further, we have to assume the customer never receives any feedback (like a package) and thus can never verify that the order was received correctly. Under these circumstances, there's almost no way we could ever assure ourselves that the orders the company was receiving had any correlation to the orders customers were placing.

To see why this is a problem, suppose some group claims to have altered the results of an election after the fact. Whether they have or not is immaterial because there would be no way to prove they had not. Voter confidence in the validity of the vote could be undermined without even going to the trouble of mounting an attack.

I do not believe that we can easily overcome any of these problems in the near future. Further I am confident that none of the present commercial offerings solve these problems. Consequently I believe that the risks of Internet voting sharply outweigh the benefits and will for some time to come. But you need not take my word for it. Numerous computer scientists have come out against Internet voting. In addition, an independent panel examined Internet voting for the Province of British Columbia and concluded:

Do not implement universal Internet voting for either local government or provincial government elections at this time. However if Internet voting is implemented, it should be limited to those voters with specific accessibility challenges. If Internet voting is implemented on a limited basis, jurisdictions need to recognize that the risks to the accuracy of the voting results remain substantial.

I strongly urge the committee to curtail Internet voting initiatives for the time being. The pressure to do something might be great, but having studied the issue, we must be the ones to educate others on why Internet voting is not for Utah.


Fuse Version 1 Candidate Release for 20140815

Colour: The spice of life

On Friday I released new code that serves as the latest candidate for Fuse version 1. Here are some of the things that were included:

  • Maintenance API — the maintenance API was completed. The maintenance API contains queries and services for managing reminders, alerts, and maintenance history.
  • Fix Refresh Token Refreshing — Refresh token refreshing is more robust now and integrated with the "fleet as repository for OAuth tokens" model of linking Fuse to Carvoyant.
  • Refactor Weekly Report — The weekly report now uses a separate function to get the fleet summary. This new function will also be used for generating exportable CSV files for taxes and other use cases.
  • Name changes — some query and event names were changed to be more consistent.

There have also been changes to Joinfuse.com:

  • Add status — the provisioning app now shows the status of the link between vehicles and Carvoyant as well as some basic data about the vehicle.
  • Version choice — there are both production and development versions of the service. Joinfuse now recognizes which one the user is attached to and uses the correct service.

In addition, the JavaScript SDK and its documentation have been updated to match changes to the underlying service.


Extending and Using Fuse

fuse trio

Fuse is a connected car system. You might have noticed that there are a bunch of connected-car products on the market. There are several things that set Fuse apart from other connected-car products, but there's one that's especially important: Fuse is extensible.

When I say "extensible" I don't just mean that Fuse has an API, although it does have that. Fuse is extensible in four important ways:

  1. Fuse has an API — the Fuse API allows anyone to write applications that use Fuse.
  2. The Fuse API is user-extensible — Anyone can write services that Fuse users can install to extend the capabilities of Fuse.
  3. Fuse is open-source — Not only is Fuse open-source, it's based on an open stack including KRE and CloudOS. Because it's open-source, you can replace or modify it at will.
  4. Fuse can be self-hosted — being able to self-host means that people have choices about their data and who controls it.

Fuse's extreme extensibility has important implications:

  • User Control of Data — the architecture necessary to create Fuse's extensibility also supports selective data sharing through relationships with its owner, manufacturer, drivers, and others.
  • Openness and Interoperability — Fuse is a model for an open, interoperable Internet of Things rather than the closed CompuServe of Things that vendors are currently offering.
  • Future Growth — Fuse can change and grow as connected-car products come and go. Fuse owners are not solely dependent on Kynetx to make Fuse work with new ideas, products, and APIs but can take matters into their own hands.

The following describe some of the ways that Fuse can be extended.

Using the Fuse API to Build an App

Fuse has an API that, much like any other API, accesses the core functions of the Fuse platform. Using the Fuse API, developers can add connected-car features to existing applications or even completely replace the stock Fuse app with something more to their liking.

The Fuse API uses OAuth so that developers can let users link their Fuse fleet to another app or service in a standard way. Fuse provides a JavaScript SDK, but you can use any language to access the API so long as it supports HTTP.

The API isn't, strictly speaking, RESTful. Instead it's an event-query API. This is a result of the Fuse architecture and is necessary to support the more advanced forms of extensibility I describe below. We're still experimenting with Event-Query APIs to determine how best to design, use, and document them.
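
As a rough sketch of the event-query style (the URL shapes here are assumptions for illustration, not the documented Fuse API): an app raises an event on a pico's channel to make something happen, then queries a ruleset function to read the resulting state.

```javascript
// Event-query URL construction, sketched with hypothetical URL shapes.
// An event is addressed by channel (eci), domain, and type...
function eventUrl(base, eci, domain, type) {
  return `${base}/event/${eci}/${domain}/${type}`;
}

// ...while a query addresses a function in a ruleset, with arguments.
function queryUrl(base, eci, ruleset, fn, args = {}) {
  const qs = new URLSearchParams(args).toString();
  return qs ? `${base}/query/${eci}/${ruleset}/${fn}?${qs}` : `${base}/query/${eci}/${ruleset}/${fn}`;
}
```

The split matters: events are requests for the pico to do something (and may be handled asynchronously by whatever rules are installed), while queries are side-effect-free reads.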

Extending the API with New Services

Fuse is constructed of persistent compute objects, or picos. You can think of a pico as a container for microservices. There is a pico for the owner, for the fleet, and for each of the vehicles in the system as shown below:

fuse microservice overall

Because each of these picos can contain different sets of services, each behaves differently and presents a unique API to the world. Note that this is not just true of a class of picos (i.e. "owner" picos have a different API than a fleet pico) but of each individual pico as well. That is, my Fuse API could be distinct from yours based on the set of services that are installed. In this way, they feel more like personal computers than traditional Web 2.0 web services that present the same, single, non-extensible API for everyone.

Consequently, developers can build and distribute their own services on the Fuse platform. If the Fuse API doesn't do what you want, you can write an extension of the API that your users can install to enable that service.

One example of why you might do this is to add support for devices besides the Carvoyant devices that Fuse is based on now. You could, for example, add a service to Fuse that uses the Automatic device instead. Because each pico is a separate service container, you would then be able to have Carvoyant devices in some of your vehicles and Automatic devices in others and still see them in a single app with consistent vehicle and fleet functionality.

Replacing Existing Services

A direct consequence of the ability to extend the existing API and the fact that it's open source is the ability to replace parts of it wholesale. This allows keeping the old API for interoperability with other apps while completely changing out the code below it.

You could, for example, fork an existing service ruleset on GitHub to fix bugs or extend its functionality. If the Fuse maintenance service doesn't suit your needs, but you want to keep the existing API so that apps can continue to use it, you can simply replace the maintenance service with one of your own.

Your new maintenance service could be just for you (i.e. installed on your own Fuse picos) or distributed more widely for other Fuse owners to use.
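The replace-the-implementation-but-keep-the-API idea can be sketched like this (the interface and numbers are illustrative, not the real Fuse maintenance ruleset):

```python
class MaintenanceAPI:
    """The stable interface that apps program against."""
    def next_service_due(self, vehicle_id):
        raise NotImplementedError


class StockMaintenance(MaintenanceAPI):
    """The original service implementation."""
    def next_service_due(self, vehicle_id):
        return {"vehicle": vehicle_id, "due_in_miles": 3000}


class MyForkedMaintenance(MaintenanceAPI):
    """A forked replacement: same API, different logic underneath."""
    def next_service_due(self, vehicle_id):
        return {"vehicle": vehicle_id, "due_in_miles": 5000}


def app_reminder(maintenance, vehicle_id):
    # Apps keep working no matter which implementation is installed,
    # because they only depend on the MaintenanceAPI interface
    due = maintenance.next_service_due(vehicle_id)
    return f"Service {due['vehicle']} in {due['due_in_miles']} miles"


print(app_reminder(StockMaintenance(), "vin-123"))
print(app_reminder(MyForkedMaintenance(), "vin-123"))
```

Because the app only depends on the interface, swapping `StockMaintenance` for `MyForkedMaintenance` in a pico changes behavior without breaking any app that uses the API.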

Hosting Fuse

Our goal with Fuse is to create a connected-car platform that belongs completely to the owner of the vehicles. We've architected Fuse to respect privacy and let people choose what happens to the data about their cars, while supporting sharing and data transferability at the same time.

One important component of owner control is the ability to self-host, even if most people don't take advantage of it. As an analogy, the fact that I can self-host email if I choose is an important part of the control I feel over my email account, even though I currently choose to let someone host it for me.

Fuse is open source and is based on an open-source software stack:

  • Fuse and its underlying pico architecture are based on an open-source pico container system called KRE (GPL license) that runs a language called KRL. KRE runs on Apache. Anyone can install and run KRE anywhere they like.

  • Fuse uses a pico-management system we call CloudOS that will be open-source (as soon as I find time to fix some code issues). CloudOS provides core functionality for picos like lifecycle management, communications, and storage.

  • The software that provides Fuse functionality, the Fuse API, is also open-source (MIT License).

  • The Fuse app is not yet open source, but will be once it's done.

Using these freely available packages, you can not only self-host Fuse, but also set up your own Fuse system if you choose.

One important benefit of self-hosting is that users are not beholden even to the account system we're using for Fuse. They can use the code to create their own accounts in a system they control.

Conclusions

We've tried to make Fuse the most open, extensible connected-car system available. Extensibility is the key to Fuse giving people better control over their data, being interoperable with a wide variety of services and things, and being able to adapt to future changes.

Fuse and Kynetx

Fuse is an open-source project that's supported by Kynetx. Kynetx is behind KRE, KRL, CloudOS, and Fuse. Kynetx makes money by supporting Fuse and through selling Fuse devices and hosting.

Getting Involved

The easiest way to get involved is to use the API. The Fuse App will be opened up to other developers soon and we welcome help in developing it and adding new features. If you're interested in extending the API or running your own Fuse system, contact me and I'll point you in the right direction.


Blockchain and Bearer Tokens

Bitcoin keychain/keyring and key

One of the problems with most substitutes for email is that they fail to implement a concept that Marc Stiegler of HP writes about in his technical report Rich Sharing for the Web. Marc outlines six features of rich sharing that are captured in this short scenario:

Alice, in a race to her next meeting, turns thunder-struck to Bob and says, “Bob, I just remembered I need to get my daughter Carol’s car to Dave’s repair shop. I’ve got to go to this meeting. Can you take Carol’s car over there?”

Marc's thesis is that email has held on so long because it's one of the few systems that supports all six features of rich sharing. But that's not really what this post is about, so I won't describe that further. You can read Marc's excellent white paper from the link above if that interests you.

As I was contemplating this scenario this morning, I was thinking that part of what makes it work is the idea of bearer tokens. A car key is essentially a bearer token: anyone who has the key can open, start, and drive the car. Hence Carol can delegate to her mother Alice by giving her the key, as can Alice to Bob, and then Bob to the mechanic.

The problem with bearer tokens is that they can be easily copied. If I give you the "key" to my account at Dropbox in the form of the OAuth2 bearer token that authorizes access, we both have a copy. In fact you could put it on a web site and everyone would be able to get a copy. OAuth2 bearer tokens don't work like car keys.

The key difference is that car keys are fermions, not bosons. That is, a key can be in exactly one place at one time, whereas bearer tokens can be in multiple places at the same time. Sure, we can make a copy of a car key, but that takes work. Exchanging keys (Alice giving the key to Bob) takes work: they have to arrange to meet or make some other arrangement. The concept of work is critical.

One of the key features of the bitcoin blockchain is that it prevents double spending. That is, even though the data representing a coin can be in many places, only one person can spend it. And, importantly, they can spend it only once. This seems like a property we'd want bearer tokens to have.

The simple concept is to put bearer tokens in a distributed ledger like the blockchain so that we only allow the current holder of the token to use it. Checking if someone is the current holder of the token is easy since everyone can have a copy of the ledger. But transferring takes work in the same way that transferring a bitcoin takes work (that's what all the "bitcoin mining" is really accomplishing).

In fact, we could probably just use bitcoins as tokens. When Alice authorizes Bob to access her account on my system, I'll send a small amount of bitcoin to Bob. When Bob accesses the system, he presents the coin (note, he just has to show it to my system, not spend it or transfer it) and I can check that it's the right coin and that Bob is the current holder. If Carla presents the token (coin), I can check that she isn't the current holder and refuse service.
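The scheme above can be sketched as a toy model. This captures only the single-current-holder property of a shared ledger; it models none of the mining, consensus, or cryptography that makes a real blockchain's transfers costly, and the names are purely illustrative:

```python
class Ledger:
    """A toy shared ledger recording the current holder of each bearer token."""
    def __init__(self):
        self.holder = {}  # token -> current holder

    def issue(self, token, holder):
        self.holder[token] = holder

    def transfer(self, token, frm, to):
        # Only the current holder can transfer. In a real blockchain,
        # this step is what mining/consensus makes costly ("work").
        if self.holder.get(token) != frm:
            raise ValueError("not the current holder")
        self.holder[token] = to

    def is_current_holder(self, token, who):
        # Anyone with a copy of the ledger can check this cheaply
        return self.holder.get(token) == who


ledger = Ledger()
ledger.issue("token-1", "alice")     # Alice authorizes access
ledger.transfer("token-1", "alice", "bob")  # ...and hands the token to Bob

# Bob presents the token: accepted. Carla presents a copy: refused.
print(ledger.is_current_holder("token-1", "bob"))    # True
print(ledger.is_current_holder("token-1", "carla"))  # False
```

The point of the sketch is that copying the token data doesn't help Carla: what matters is the ledger's record of who holds it, and changing that record takes work.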

A few thoughts:

  • There's nothing in this system to prevent Bob from transferring the coin to Carla. That is, after Alice gives the token to Bob, she has no control who he transfers it to.
  • This system costs real money. That is, bitcoins, no matter how small, have value. But that's a feature, not a bug, as the saying goes. The cost makes bearer tokens behave more like physical keys.

This is open-loop thinking, but I'd appreciate your thoughts and feedback on how a system like this might work better and what problems it might solve or create.


What Happens to the Data

Silos de Trigueros

Nathan Schor pointed me at an article about Metromile that appeared in TechCrunch recently. Metromile is a per-mile insurance company that uses an OBD II device that you plug into your car. It tracks your vehicle stats, similar to Fuse, Automatic, and other connected-car services.

The kicker is that it's free, because Metromile makes money by selling per-mile insurance. The more users they have using their device, the bigger their potential market for selling insurance. That is made evident by the fact that you can only get the free device if you live in a state where they offer insurance (currently CA, OR, and IL). Otherwise, get in line (until they come to your state, presumably).

I don't know how Metromile is implemented, but I wonder what happens to the data. I'm pretty sure they're using a cellular device (rather than Bluetooth) so that the data is always transmitted to their system even if your phone's not in the car or connected. Does all the data about every trip go to the insurance company? Or some aggregation? What's the algorithm?

These questions are relevant because it's unclear who ultimately owns this data. Users aren't paying for the device or the data, just the insurance. As I wrote in The CompuServe of Things, business models that connect devices to non-substitutable services threaten to leave users with little control over the things they own and use.

I believe users ought to be customers who own the data and control where and how it's used. That doesn't mean they can't choose to share it with the insurance company, but they ought to know what's being shared and even be able to substitute one insurance company for another. If every connected-car device is associated with a different insurance company, I can't switch without giving up access to all the data that's been collected about my car and driving.

Data silos with murky policies about data ownership are all too common. Unfortunately, they lead to a future I don't want to live in. And if you think about it, I'll bet you won't want to live there either.