Last week, I referenced an article in American Banker on the responsibilities of blockchain developers. I focused mainly on the governance angle, but the article makes several pokes at the "decentralization charade" and that's been bothering me. The basic point is that (a) there's no such thing as a blockchain without governance (whether ad hoc or deliberate) and (b) governance means that the ledger isn't truly decentralized.
In Re-imagining Decentralized and Distributed, I make the distinction between distributed and decentralized by stating that decentralized systems are composed of pieces that are not under the control of any single entity. By that definition, DNS, for example, is a pretty good example of a decentralized service since it's composed of servers run by millions of separate organizations around the world, cooperating to map names to IP numbers. There are others including email, the Web, and the Internet itself.
But DNS is clearly subject to some level of governance. The protocol is determined by a standards body. Most of the DNS servers in the world run an open-source DNS server called BIND that is managed by the Internet Systems Consortium. Domain names themselves are governed by rules put in place by ICANN. There is a group of people who control, for better or worse, what DNS is and how it works.
So, is DNS decentralized? I maintain that DNS is decentralized, despite a relatively small set of people who, together, govern it. Here's why:
First, we have to recognize that decentralization is a continuum, not a binary proposition. Could we imagine a system for mapping names into IP numbers that is more decentralized? Probably. Could we imagine one less decentralized? Most certainly. And given how DNS is governed, there are a multitude of entities who have to agree to make significant changes to the overall operation of the DNS system.
Second, and more important, the governance of the DNS system is open. Structurally, it's difficult for those who govern DNS to make any large-scale change without everyone knowing about it and, if they choose, objecting.
Third, the kinds of decisions that can be made by the governance bodies are limited, in practice, by the structure of the system, the standards, and the traditions of practice that have grown up around it. For example, there is a well-defined process for handling domain name disputes. Not everyone will be happy with it, but at least it exists and is understood. Dispute resolution, as one example, is not ad hoc, arbitrary, or secret.
Lastly, the DNS system may be governed by a relatively small set of people and organizations, but it's run by literally millions. People running DNS servers have a choice about what server software they run. If enough of them decided to freeze their software at a particular version because they objected to changes, or to fork the code, they could effectively derail an unpopular decision.
Distributed ledgers will have varying levels of decentralization depending on their purpose and their governance model and how that model is made operational. The standard by which they should be judged is not "does any human ever make a decision affecting the ledger" but rather:
Is the ledger as decentralized as we can make it while achieving the ends for which the ledger was created?
Is the governance process open? Who can participate? How are the governing entities chosen?
How light is the governance? Are the kinds of decisions the governing bodies can make limited by declared process?
Is the operation of the system dependent on the voluntary participation of entities outside the governing bodies?
Distributed ledgers are young and the methods and modes of governance, along with those entities participating in their governance, are in flux. There are many decisions yet to be made. What's more, there's not one distributed ledger, but many. We're still experimenting with what will work and what won't.
While a perfectly decentralized system may be beyond our reach and even undesirable for many reasons, we can certainly do better than the centralized systems that have grown up on the Web to date. Might we come up with even more decentralized systems in the future? Yes. But that shouldn't stop us from creating the most decentralized systems we can now. And for now, we've seen that governance is necessary. Let's keep it light and open and move forward.
Non-permissioned distributed ledgers like Ethereum will continue to serve important needs, but organizations like banks, insurance companies, credit unions, and others who act as fiduciaries and must meet regulatory requirements, will prefer permissioned ledgers that can provide explicit governance. See Properties of Permissioned and Permissionless Blockchains for more on this.
Governance models for permissioned ledgers should strike a careful balance between what’s in the code and what’s decided by humans. Having everything in code isn’t necessarily the answer. But having humans too heavily involved can open the system up to interference and meddling—both internal and external.
Permissioned ledgers also need to be very clear about what the procedures are for adjudicating problems with the ledger. They can’t be seen as ad hoc or off the cuff. We must have clear dispute resolution procedures and know what disputes the governance system will handle and those it won't.
Governance in permissioned distributed ledgers provides a real solution to some of the ad hoc machinations that have occurred recently with non-permissioned blockchains.
Consider a distributed ledger that provides people (among other principals) with an identity and a place to read and write, securely and privately, various claims. As a distributed ledger, it's not controlled by any single organization and is radically decentralized and distributed.
In the following diagram, the Department of Motor Vehicles has written a driver's license record on the distributed ledger. Later, John is asked to prove his age at Walmart. John is involved in permissioning both the writing and reading of the record. Further, the record is written so that John doesn't have to disclose the entire driver's license, just the fact that he's over 18.
Walmart and the DMV are interacting despite the lack of explicit integration of their systems. They are interacting via a distributed ledger that provides secure and private claim presentment. Further, John (the person they're talking about) is structurally part of the conversation. I call this sovereign-source integration since it's based on sovereign-source identity.
Even if there were 20 different distributed ledger systems that Walmart had to integrate with, that's still less work than integrating with every DMV. And, they can now write receipts when you shop or read transcripts when you apply for a job—all with your permission, of course.
Security and privacy are ensured by the proper application of cryptography, including public-private key pairs, digital signatures, and cryptographic hashes. This isn't easy, but it's doable. There's nothing about the scenario I'm painting that is waiting on some technology revolution. Everything we need is available now.
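As a minimal sketch of how a signed, selectively disclosed claim might work, consider the following. This uses only Python's standard library, and an HMAC with a demo key stands in for a real digital signature; an actual ledger would use asymmetric key pairs and proper key management, and every name here is illustrative.

```python
import hashlib
import hmac
import json

DMV_KEY = b"dmv-demo-key"  # stand-in for the DMV's signing key (illustrative)

def issue_claim(subject_id, attribute, value):
    """DMV issues a signed claim disclosing only one attribute."""
    claim = {"subject": subject_id, attribute: value}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(DMV_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_claim(signed):
    """Verifier checks the signature over exactly the disclosed claim."""
    payload = json.dumps(signed["claim"], sort_keys=True).encode()
    expected = hmac.new(DMV_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

# John presents only the fact that he's over 18 -- not his address,
# license number, or birth date.
proof = issue_claim("john", "over_18", True)
```

The point of the sketch is the shape of the interaction: the issuer signs a minimal claim, and any verifier can check it without seeing the rest of the license.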
I wrote a post a few weeks ago about how sovereign-source integration helps solve the problems of building a virtual university. In that article, the student profile (including an LRS) is the distributed, personally controlled integration point. The information in the student profile might all be written as claims on a distributed ledger, but they could also be in some off-ledger system that the distributed ledger just points to. Either way, once the student has provided the various institutions participating in the virtual university with their integration point, the various university systems are able to work together through the integration point instead of needing point-to-point integrations.
The world is too big and vast to imagine that we can scale point-to-point integrations to cover every imaginable use case. The opportunities for this architecture in finance, healthcare, egovernment, education, and other areas of human interaction boggle the mind. Sovereign-source integration is a way to cut the Gordian knot.
The students in my lab at BYU are running a booth at OpenWest this year. OpenWest is one of the great open source conferences in the US. There are 1400 people here this year. When the call for papers came out, I missed the deadline. Not to worry, I decided to sponsor a booth. That way my students can speak for three days instead of an hour. Here's what they're demoing at OpenWest this week.
A while back, I wrote a blog post about my work with the ESProto sensors from Wovyn. Johannes Ernst responded with an idea he'd had for a little control project in his house. He has a closet with computers in it that sometimes gets too hot. He wanted to automatically control some fans and turn them on when the closet was too hot. I asked my students—Adam Burdett, Jesse Howell, and Nick Angell—to mock up that situation in an old equipment box.
Physically, the box has two pancake fans on the top, a light bulb as a heat source, an ESProto temperature sensor inside the box, and one outside the box. There's a Raspberry Pi that controls the light and fans. The RPi presents an API.
We could just write a little script on the RPi that reads the temperatures and turns fans on or off. But that wouldn't be much fun. And it wouldn't give us an excuse to work on our vision for using picos to create communities of things that cooperate. Granted, this example is small, but we've got to start somewhere.
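The "little script" version of the problem is worth seeing, if only to appreciate what the pico-based design adds. Here's a hedged sketch of the control decision: the thresholds, and the idea of adding hysteresis so the fans don't chatter, are my assumptions, not the lab's actual code.

```python
# Naive closet-cooling logic: decide whether the fans should run based
# on the inside and outside temperatures. Thresholds are illustrative.

ON_THRESHOLD = 80.0   # turn fans on above this temperature (F)
OFF_THRESHOLD = 75.0  # turn fans off below this temperature (F)

def fan_state(inside_temp, outside_temp, currently_on):
    """Return True if the fans should run.

    Running fans only helps if the outside air is cooler than the
    inside air. Between the two thresholds we keep the current state
    (hysteresis) so the fans don't flap on and off."""
    if outside_temp >= inside_temp:
        return False
    if inside_temp > ON_THRESHOLD:
        return True
    if inside_temp < OFF_THRESHOLD:
        return False
    return currently_on
```

A real script would loop, reading the ESProto sensors and calling the RPi's fan API with this decision.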
The overall design uses picos to represent spimes for the physical devices: two fans and two temperature sensors. There is also a pico to represent the community of fans and one to represent the closet, the overall community to which all of these belong. The following diagram illustrates these relationships.
Pico Structure for the Closet Demo
The Fan Collection is an important part of the overall design because it abstracts and encapsulates the individual fans so that the closet can just indicate it wants more or less airflow without knowing the details of how many fans there are, how fans are controlled, whether they're single or variable speed, and so on. The Fan Collection manages those details.
That's not to say that the Fan Collection knows the details of the fans themselves. Those details are abstracted by the Fan picos. The Fan picos present a fairly straightforward representation of the fan and its capabilities.
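To make the two levels of abstraction concrete, here's a hypothetical sketch in Python (the real implementation is picos and KRL; the class and method names are mine, not Wrangler's). The closet only ever asks the collection for more or less airflow; the collection decides which fans run.

```python
# Illustrative model of the Fan Collection abstraction: the closet
# requests airflow levels; the collection maps levels onto fans.

class Fan:
    """Stands in for a Fan pico: knows only its own on/off state."""
    def __init__(self, name):
        self.name = name
        self.running = False

    def set_running(self, on):
        self.running = on

class FanCollection:
    """Stands in for the Fan Collection pico."""
    def __init__(self, fans):
        self.fans = fans
        self.level = 0  # current airflow level, 0 .. len(fans)

    def more_airflow(self):
        self.level = min(self.level + 1, len(self.fans))
        self._apply()

    def less_airflow(self):
        self.level = max(self.level - 1, 0)
        self._apply()

    def _apply(self):
        # Run the first `level` fans; the closet never sees this detail.
        for i, fan in enumerate(self.fans):
            fan.set_running(i < self.level)
```

Swapping in variable-speed fans, or a third fan, changes only `_apply`; the closet's interface stays the same, which is the point of the encapsulation.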
This demo provides us with a project to use Wrangler. Wrangler is the pico operating system that Pico Labs has been working on for the last year. Wrangler is a follow-on to CloudOS, a pico control system that we built at Kynetx and that was the code underlying Fuse, the connected-car platform we built. Wrangler improves on CloudOS by taking its core concepts and extending and normalizing them.
The primary purpose of Wrangler is pico life cycle management. While the pico engine provides methods for creating and destroying picos, installing rulesets, and creating channels, those operations are low-level—using them is a lot of work.
As an example of how Wrangler improves on the low-level functions in the pico engine, consider pico creation. Creating a useful child pico involves the following steps:
create the child
name the child
install rulesets in the child
initialize the child
link the child to other picos using subscriptions
Wrangler uses the concept of prototypes to automate most of this work. For example, a developer can define a prototype for a temperature sensor pico. Then using Wrangler, temperature sensor picos, with the correct configuration, can be created with a single action. This not only reduces the code a developer has to write, but also reduces configuration errors.
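The five steps above, driven by a prototype, can be sketched as follows. This is Python rather than KRL, and the prototype's field names are illustrative assumptions about what such a description might contain, not Wrangler's actual schema.

```python
# A prototype bundles everything needed to create a useful child pico,
# so one action replaces five manual steps. Field names are illustrative.

TEMP_SENSOR_PROTOTYPE = {
    "rulesets": ["sensor.temperature", "sensor.reporting"],
    "initial_state": {"units": "F", "interval_secs": 60},
    "subscriptions": ["closet_community"],
}

def create_from_prototype(name, prototype):
    """Create, name, configure, initialize, and link a child pico."""
    child = {"name": name,            # create and name the child
             "rulesets": [],
             "state": {},
             "subscriptions": []}
    for rid in prototype["rulesets"]:             # install rulesets
        child["rulesets"].append(rid)
    child["state"].update(prototype["initial_state"])  # initialize
    for sub in prototype["subscriptions"]:        # link via subscriptions
        child["subscriptions"].append(sub)
    return child

sensor = create_from_prototype("inside_temp", TEMP_SENSOR_PROTOTYPE)
```

Because the prototype is declared once and reused, every temperature sensor pico comes up identically configured, which is where the reduction in configuration errors comes from.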
The great thing about going to a conference—as a speaker or an exhibitor—is that it gives you a deadline for things you're working on. OpenWest provided just such an excuse for us. The demo drove thinking and implementation. If you're at OpenWest this week, stop by and see what we've done and ask some questions.
I have a problem: a long time ago, Kynetx built a ruleset management tool called AppBuilder. There are some important rulesets in AppBuilder. I'd like to shut down AppBuilder, but first I need to migrate all the important rulesets to the current ruleset registry. There's just one tiny thing standing in my way: I don't know which rulesets are the important ones.
Sure, I could guess and get most of them. Then I'd just wait for things to break to discover the rest. But that's inelegant.
My first thought was to write some code to instrument the pico engine. I'd increment a counter each time it loads a ruleset. That way I'd see what's being used. No guessing. I'd need some way to get data into the database and back out.
But then I had a better idea. Why not write instrumentation data into the persistent variable space of a system ruleset? The system ruleset can access and modify any of these variables. And it's flexible. Rather than making changes to the engine and rolling to production each time I change the monitoring, I update the system ruleset.
Right now, there's just one variable: rid_usage. The current system ruleset is simple. But it's a start. All the pieces are in place now to use this connection for monitoring, controlling, and configuring the pico engine.
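The instrumentation idea is simple enough to sketch. Here a plain dict stands in for the system ruleset's persistent variable space (the real thing is KRL persistent variables); the function names are mine.

```python
# Sketch of rid_usage instrumentation: each time the engine loads a
# ruleset, bump a counter keyed by ruleset ID (RID).

rid_usage = {}

def record_ruleset_load(rid):
    """Increment the load counter for a ruleset."""
    rid_usage[rid] = rid_usage.get(rid, 0) + 1

def important_rulesets(min_loads=1):
    """Rulesets that have actually been loaded -- the ones worth migrating."""
    return sorted(rid for rid, n in rid_usage.items() if n >= min_loads)
```

After running for a while, the counter answers the original question directly: any AppBuilder ruleset that never shows up in `rid_usage` doesn't need migrating.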
I like this idea a lot because KRL is being used to implement important services on the platform that implements KRL. Very meta... And when systems start to be defined in their own language, that's a good thing.
I'm now on my second Internet-connected sprinkler controller. The first, a Lono, worked well enough although there were some features missing. Last week, I noticed that the program wasn't running certain zones. I wasn't sure what to do and I couldn't find help from Lono, so I decided I'd try a second one. My second purchase, based on both friends' recommendations and reviews on Amazon, was a Rachio. I installed it on Saturday.
As I was working on setting up the programs and experimenting with them I noticed that the new sprinkler controller had stopped working. When I went to check on it, I discovered that it was completely dead: no lights, no response.
I rebooted the controller and started over. It got to the same point and the sprinkler controller died again. A little thought showed that the Rachio sprinkler controller was dying at exactly the same point that the Lono was failing to complete its program. The problem? A short in one of the circuits.
The Lono and the Rachio both fail at handling failure. The old controller, an Irritrol, just dealt with it and kept right on going. None of them, including the Irritrol, did a good job of telling me that I had a short-circuit.
Building sprinkler controllers is a tough job. The environment is dirty and wet. The valves and sensors are numerous and varied. I don't know about you, but it's a rare year I don't replace valve solenoids or rewire something. A sprinkler controller has to roll with this environment to pass muster. To be excellent, it has to help with debugging and solving the problems.
Fancy water saving features, cool Web sites, and snazzy notifications are fine. But they're like gold-plated bathroom fixtures in a hotel room with dirty sheets if the controller doesn't do its basic job: run the sprinklers reliably.
In the taxonomy of Bruce Sterling's Shaping Things, the Fitbit is a Gizmo.
"Gizmos" are highly unstable, user-alterable, baroquely multifeatured objects, commonly programmable, with a brief lifespan. Gizmos offer functionality so plentiful that it is cheaper to import features into the object than it is to simplify it. Gizmos are commonly linked to network service providers; they are not stand-alone objects but interfaces. People within an infrastructure of Gizmos are "End-Users."
People buy Fitbits believing that they're buying a thing, but in fact, they're buying a network service. The device is merely the primary interface to that service. The Fitbit is useless without the service. Just a chunk of worthless plastic and silicon.
The device is demanding. We buy Fitbits and then fiddle with them incessantly. Again, to quote Bruce:
...Gizmos have enough functionality to actively nag people. Their deployment demands extensive, sustained interaction: upgrades, grooming, plug-ins, plug-outs, unsought messages, security threats, and so forth.
Sometimes we're messing with them because we're bored and relieve the boredom with a little configuration. Often we're forced to configure and reconfigure because it's not working. We feel guilt over buying something we're not using. Usually, the Fitbit ends up in a drawer unused after the guilt wears off and the pain of configuration overwhelms the perceived benefit.
Fitbit isn't selling things. They probably fancy themselves selling better health or fitness. But, Fitbit is really selling a way to measure, and perhaps analyze, some aspect of your life. They package it up like a traditional product and put it on store shelves, but the thing you buy isn't a traditional product. Without the service and the account underlying it, you have nothing.
Of course, I'm not talking about Fitbit alone. Fitbit is just a well-known example. Everything I've said applies to every current product in the so-called Internet of Things. They are all just interfaces to the real product: a networked service. I say "so-called" because a more appropriate name for the Gizmo ecosystem is CompuServe of Things.
Bruce's book is a trail guide to what comes after Gizmos: something he calls a "spime." Spimes are material instantiations of an immaterial system. They begin and end with data. Spimes will have a different architecture than the CompuServe of Things. To work, they will cooperate and interact in true Internet fashion.
Online, I am Sybil. So are you. You have no digital representation of your individual identity. Rather, you have various identities, disconnected and spread out among the administrative domains of the various services you use.
An independent identity is a prerequisite to being able to act independently. When we are everywhere, we are nowhere. We have no independent identity and are thus constantly subject to the intervening administrative identity systems of the various service providers we use.
Building a self-sovereign identity system changes that. It allows individuals to act and interact as themselves. It allows individuals to have more control over the way they are represented and thus seen online. As the number of things that intermediate our lives explodes, having a digital identity puts you at the center of those interchanges. We gain the power to act instead of being acted upon.
This is why I believe the discussion of online privacy sells us short. Being self-sovereign is about much more than controlling how my personal data is used. That's playing defense and is a cheap substitute for being empowered to act as an individual. Privacy is a mess of pottage compared to the vast opportunities that being an autonomous digital individual enables.
Technically, there are several choices for implementing a self-sovereign identity system. Most come down to one of three:
a public, permissionless distributed ledger (blockchain)
a public, permissioned distributed ledger
a private, permissioned distributed ledger1
Public or private refers to who can join—anyone can join a public ledger. A public system allows anyone to get an identity on the ledger. Private systems restrict who can join. I owe this categorization to Jason Law.
Permissioned and permissionless refers to how the ledger's validators are chosen. As I discussed in Properties of Permissioned and Permissionless Blockchains, these two types of ledgers provide a different emphasis on the importance of protection from censorship and protection from deletion. People of a more libertarian bent will prefer permissionless because of its emphasis on protection from censorship, while those who need to work within regulatory regimes will prefer permissioned.
We could debate the various benefits of each of these types of self-sovereign identity systems, but in truth they are all preferable to what we have today, as each allows individuals to create and control identities independent of the various administrative domains with which people interact. In fact, I suspect that one or more instantiations of each of these three types will exist in parallel to serve different needs. Unlike the physical world, where we live in just one place, online we can have a presence in many different worlds. People will use all of these systems and more.
Regardless of the choices we make, the principle that ought to guide the design of self-sovereign identity systems is respect for people as individuals and ensuring they have the ability to act as such.
"On the Net today we face a choice between freedom and captivity, independence and dependence."
I don't believe this is overstated. As more and more of our lives are intermediated by software-based systems, we will only be free if we are free to act as peers of these services. An independent identity is the foundation for that freedom to act.
Imagine you wanted to create a virtual university (VU)1. VU admits students and allows them to take courses in programs that lead to certificates, degrees, and other credentials. But VU doesn't have any faculty or even any courses of its own. VU's programs are constructed from courses at other universities. VU's students take courses at whichever university offers it. In an extreme version of this model, VU doesn't even credential students. Rather, those come from participating institutions who have agreed, on a program-by-program basis, to accept certain transfer credits from other participating universities to fulfill program requirements.
Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.
These companies are thin layers sitting on an abundance of supply. They connect customers to that supply. VU follows a similar pattern. VU has no faculty, no campus, no buildings, no sports teams. VU doesn't have any classes of its own. Moreover, as we'll see, VU doesn't even have much student data. VU provides a thin layer of software that connects students anywhere in the world with a vast supply of courses and degree programs available.
There are a lot of questions about how VU would work, but what I'd like to focus on in this post is how we could construct (or, as we'll see later, deconstruct) IT systems that support this model.
Traditional University IT System
Before we consider how VU can operate, let's look at a simple block model of how traditional university IT systems work.
Universities operate three primary IT systems in support of their core business: a learning management system (LMS), a student information system (SIS), and a course management system.2
The LMS is used to host courses and is the primary place that students use course material, take quizzes, and submit assignments. Faculty build courses in the LMS and use it to evaluate student work and assign grades.
The SIS is the system of record for the university and tracks most of the important data about students. The SIS handles student admissions, registrations, and transcripts. The SIS is also the system that a university uses to ensure compliance with various university and government policies, rules, and regulations. The SIS works hand-in-hand with the course management system that the university uses to manage its offerings.
The SIS tells the LMS who's signed up for a particular course. The LMS tells the SIS what grades each student got. The course management system tells the SIS and LMS what courses are being offered.
Students usually interact with the LMS and SIS through Web pages and dedicated mobile apps.
VU presents some challenges to the traditional university IT model. Since these university IT systems are monoliths, you might be able to do some back-end integrations between VU's systems and the SIS and LMS of each participating university. The student would then have to use VU's systems and those of each participating university.
The Personal API and Learning Records
I've written before about BYU's efforts to build a university API. A university API exposes resources that are directly related to the business of the university such as /students, /instructors, /classes, /enrollments, and so on. Using a standard, consistent API developers can interact with any relevant university system in a way that protects university and student data and ensures that university processes are followed.
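To give a feel for what consuming an API of that shape looks like, here's a hypothetical sketch. The paths follow the resource names above, but the base URL, response fields, and stubbed data are my assumptions, not BYU's actual API.

```python
# Illustrative client code against a university API shaped like
# /students, /classes, /enrollments, etc. Fields are assumptions.

def get_enrollments(api_get, student_id):
    """Fetch a student's enrollments and return their course IDs.

    `api_get` is any callable that takes a path and returns parsed
    JSON, so this works with whatever HTTP client you prefer."""
    enrollments = api_get("/students/%s/enrollments" % student_id)
    return [e["class_id"] for e in enrollments]

# A stub standing in for real HTTP calls, for demonstration:
def fake_api_get(path):
    if path == "/students/42/enrollments":
        return [{"class_id": "CS-462"}, {"class_id": "PHIL-201"}]
    return []
```

The value of the consistent resource model is that this same client code works against any university system that exposes the API, which is exactly what VU needs.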
We've also been exploring how a personal API functions in a university setting. For purposes of this discussion, let's imagine a personal API that provides an interface to two primary repositories of personal data: the student profile and the learning record store (LRS). The profile is straightforward and contains personal information that the student needs to share with the university in order to be admitted, register for and take courses, and complete program requirements.
The LRS stores the stream of all learning activities by the student. These learning activities include things like being admitted to a program, registering for a class, completing a reading assignment, taking a quiz, attending class, getting a B+ in a class, or completing a program of study. In short there is no learning activity that is too small or too large to be recorded in the LRS. The LRS stores a detailed transcript of learning events.3
One significant contrast between the traditional SIS/LMS that students have used and the LRS is this: the SIS/LMS is primarily a record of the current status of the student that records only coarse-grained achievements, whereas the LRS represents the stream of learning activities, large and small. The distinction is significant. My last book was called The Live Web because it explored the differences between systems that make dynamic queries against static data (the traditional SIS/LMS) and those that perform static queries on dynamic streams of data. The LRS is decidedly part of the live web.
The personal API, as its name suggests, may provide an interface to any data that the person owns, but right now we're primarily interested in the profile and LRS data. For purposes of this discussion, we'll refer to the combination of the profile and LRS as the "student profile."
We can construct the student profile such that it can be hosted. By hosted, I mean that the student profile is built in a way that each profile could, potentially, be run on different machines in different administrative domains, without loss of functionality. One group of students might be running their profiles inside their university's Domain of One's Own system, another group might be using student profiles hosted by their school, other students might be using a commercial product, and some intrepid students might choose to self-host. Regardless, the API provides the same functionality independent of the domain in which the student profile operates.
Even when the profile is self-hosted, the information can still be trusted because institutions can digitally sign accomplishments so others can be assured they're legitimate.
Deconstructing the SIS
With the personal-API-enabled student profile, we're in a position to deconstruct the University IT systems we discussed above. As shown in the following diagram, the student profile can be separated from the university system. They interact via their APIs. Students interact with both of them through their respective APIs using applications and Web sites.
The university API and the student profile portions of the personal API are interoperable. Each is built so that it knows about and can use the other. For example, the university API knows how to connect to a student profile API, can understand the schema within, respects the student profile's permissioning structures, and sends accomplishments to the LRS along with other important updates.
For its part, the student profile API knows how to find classes, see what classes the student is registered for, receive university notifications, check grades, connect with classmates, and sends important events to the university API.
VU can use both the university systems and the student profile. Students can access all three via their APIs using whatever applications are appropriate.
The VU must manage programs made from courses that other universities teach and keep track of who is enrolled in what (the traditional student records function). But VU can rely on the university's LMS, major functions of its SIS, and information in the student profile to get its job done. For example, if VU trusted that the student profile would be consistently available, it would need to know who its students are, but could evaluate student progress using transcript records written by the university to the student profile.
Building the Virtual University
With this final picture in mind, it's easy to see how multiple universities and different student profile systems could work together as part of VU's overall offerings.
With these systems in place, VU can build programs from courses taught at many universities, relying on them to do much of the work in teaching students and certifying student work. Here is what VU must do:
VU still has the traditional college responsibility of determining what programs to offer, selecting courses that make up those programs, and offering those to its students.
VU must run a student admissions process to determine who to admit to which programs.
VU has the additional task of coordinating with the various universities that are part of the consortium to ensure they will accept each other's courses as prerequisites and, if necessary, as equivalent transfer credits.
VU must evaluate student completion of programs and either issue certifications (degrees, certificates of completion, etc.) itself or through one of its member institutions.
Universities aren't responsible for anything more than they already do. Their IT systems are architected differently to have an API and to interact with the student profile, but otherwise they are very similar in functionality to what is in place now. Accomplishments at each participating institution can be recorded in the student profile.
VU students apply to and are admitted by VU. They register for classes with VU. They look to VU for program guidance. When they take a class, they use the LMS at the university hosting the course. The LMS posts calendar items and notifications to their student profile. The student profile becomes the primary system the student uses to interact with both VU and the various university LMSs. They have little to no interaction with the SIS of the respective universities.
One of the advantages of the hosted model for student repositories is that they don't have to be centrally located or administered. As a result student data can be located in different political domains in accordance with data privacy laws.
Note that the student profile is more than a passive repository of data that has limited functionality. The student profile is an active participant in the student's experience, managing notifications and other communications, scheduling calendar items, and even managing student course registration and progress evaluation. The student profile becomes a personal learning environment working on the student's behalf in conjunction with the various learning systems the student uses.
Since the best place to integrate student data is in the student profile, it ought to exist long before college. Then students could use their profile to create their application. There's no reason high school activities and results from standardized testing shouldn't be in the LRS. Student-centric learning requires student-centric information management.
We can imagine that this personal learning environment would be useful outside the context of VU and provide the basis for the student's learning even after she graduates. By extending it to allow programs of learning to be created by the student or others, independent of VU, the student profile becomes a tool that students can use over a lifetime.
The design presented here follows the simple maxim that the student is the best place to integrate information about the student. By deconstructing the traditional centralized university systems, we can create a system that supports a much more flexible model of education. APIs provide the means of modularizing university IT systems and creating a student-centric system that sits at the heart of a new university experience.
Don't construe this post to be "anti-university." In fact, I'm very pro-university and believe that there is great power in the traditional university model. Students get more when they are face-to-face with other learners in a physical space. But that is not always feasible and once students leave the university, their learning is usually much less structured. The tools developed in this post empower students to be life-long learners by making them more responsible for managing their own learning environment.
Universities, like all large organizations, also use things like financial systems, HR systems, and customer management systems, along with a host of other smaller support IT systems. But I'm going to ignore those in this post since they're really boring and not much different from those used by other businesses.
The xAPI is the proposed way that LRSs interact with each other and other learning systems. xAPI is an event-based protocol that communicates triples that have a subject, verb, and object. Using xAPI, systems can communicate information such as "Phillip completed reading assignment 10." In my more loose interpretation, an LRS might also store information from Activity Streams or other ways that event-like information can be conveyed.
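A statement like "Phillip completed reading assignment 10" has a simple concrete shape. Here's a minimal sketch of building one; the `completed` verb IRI is the standard ADL one, but the activity ID and the exact field set are illustrative and omit parts of a full xAPI statement (IDs, timestamps, actor IFIs).

```python
# Build a minimal xAPI-style actor/verb/object statement.
# Verbs and activities are identified by IRI per the xAPI convention.

def make_statement(actor_name, verb_id, object_id, object_name):
    return {
        "actor": {"name": actor_name},
        "verb": {"id": verb_id},
        "object": {
            "id": object_id,
            "definition": {"name": {"en-US": object_name}},
        },
    }

stmt = make_statement(
    "Phillip",
    "http://adlnet.gov/expapi/verbs/completed",     # standard ADL verb
    "http://example.com/assignments/reading/10",    # illustrative activity
    "Reading Assignment 10",
)
```

An LRS is, at heart, an append-only stream of records like this one, which is what makes it a live-web counterpart to the SIS's static snapshot.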