Building Decentralized Applications with Pico Networks

Picos are designed to form heterarchical, or peer-to-peer, networks by connecting directly with each other. Because picos use an actor model of distributed computation, parent-child relationships are very important. When a pico creates another pico, we say that it is the parent and the newly created pico is the child. The parent-child connection allows the creating pico to perform life-cycle management tasks on the newly minted pico, such as installing rulesets or even deleting it. And the new pico can create children of its own, and so on.

Building a system of picos for a specific application requires programming them to perform the proper life-cycle management tasks to create the picos that model the application. Wrangler, a ruleset automatically installed in every pico, is the pico operating system. Wrangler provides rules and functions for performing these life-cycle management tasks.
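
For example, a rule like the following asks Wrangler to create a new child pico. This is a minimal sketch: the community domain, event names, and ruleset name are invented for illustration, and the Wrangler event and attribute names follow the current Picolabs lessons, so check the documentation for your engine version.

ruleset com.example.community {
  rule create_sensor_pico {
    select when community new_sensor_needed
    always {
      // ask Wrangler to create a child with the given name
      raise wrangler event "new_child_request"
        attributes {"name": event:attrs{"name"}}
    }
  }
}

Wrangler raises its own events as the child comes into being, and other rules can select on those to finish setup, for example by installing rulesets in the new pico.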

A pico application can rarely rely on the hierarchical parent-child relationships alone. Instead, picos connect to one another by creating what are called subscriptions: bi-directional channels used for raising events to and making queries of the other pico.
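
As a sketch of how a subscription gets made (this assumes the API of the io.picolabs.subscription ruleset described in the Pico to Pico Subscriptions lesson; the role, name, and attribute values are illustrative and may vary by version), a rule can propose a subscription to a peer whose well-known ECI it has been given:

rule connect_to_peer {
  select when sensor connect_to_peer
  always {
    // propose a bi-directional subscription to the peer pico
    raise wrangler event "subscription"
      attributes {
        "wellKnown_Tx": event:attrs{"wellKnown_eci"},
        "Tx_role": "sensor", "Rx_role": "sensor",
        "name": "sensor_peer", "channel_type": "subscription"
      }
  }
}

Once the peer accepts, each pico holds a channel it can use to raise events to, and make queries of, the other.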

This diagram shows a network of temperature sensors built using picos. In the diagram, black lines are parent-child relationships, while pink lines are peer-to-peer relationships between picos.

Temperature Sensor Network (click to enlarge)

There are two picos (one salmon and the other green), each labeled a "Sensor Community". These are used for management of the temperature sensor picos (which are purple). These community picos perform life-cycle management of the various sensor picos that are their children. They can be used to create new sensor picos and delete those no longer needed. Their programming determines what rulesets are installed in the sensor picos and, through those rulesets, they control things like whether a sensor pico is active and how often it updates its temperature. These communities might represent different floors of a building or different departments on a large campus.

Despite the fact that there are two different communities of temperature sensors, the pink lines tell us that there is a network of connections that spans the hierarchical communities to create a single connected graph of sensors. In this case, the sensor picos are programmed to use a gossip protocol to share temperature information and threshold violations with each other. They use a CRDT to keep track of the number of threshold violations currently occurring in the network.
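
The heart of that peer-to-peer exchange is a rule that relays events over established subscriptions. The full gossip and CRDT logic is more involved, but a minimal sketch looks like this (it assumes the established() function shared by the io.picolabs.subscription module; the domain, type, and role names are invented):

ruleset com.example.temperature_gossip {
  meta {
    use module io.picolabs.subscription alias subs
  }
  rule gossip_temperature {
    select when sensor new_temperature
    // send the reading to every subscribed peer
    foreach subs:established("Tx_role", "node") setting (peer)
      event:send({
        "eci": peer{"Tx"},
        "domain": "gossip", "type": "temperature_reading",
        "attrs": event:attrs
      })
  }
}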

The community picos are not involved in the network interactions of the sensor picos. The sensor network operates independently of the community picos and does not rely on them for communication. Astute readers will note that both communities are children of a "root" pico. That's an artifact of the way I built this, not a requirement. Every pico engine has a root pico that has no parent. These two communities could have been built on different engines and still have created a sensor network spanning multiple communities operating on multiple engines.

Building decentralized networks of picos is relatively easy because picos provide support for many of the difficult tasks. The actor model of picos makes them naturally concurrent without the need for locks. Picos have persistent, independent state so they do not depend on external data stores. Picos have a persistent identity—they exist with a single identity from the time of their creation until they are deleted. Picos are persistently available, always on and ready to receive messages. You can see more about the programming that goes into creating these systems in these lessons: Pico-Based Systems and Pico to Pico Subscriptions.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.


Passwords Are Ruining the Web

No Passwords

Compare, for a moment, your online, web experience at your bank with the mobile experience from the same bank. Chances are, if you're like me, that you pick up your phone and use a biometric authentication method (e.g. FaceId) to open it. Then you select the app and the biometrics play again to make sure it's you, and you're in.

On the web, in contrast, you likely end up at a landing page where you have to search for the login button, which is hidden in a menu or at the top of the page. Once you do, it probably asks you for your identifier (username). You open up your password manager (a few clicks) and fill in the username, and only then does it show you the password field1. You click a few more times to fill in the password. Then, if you use multi-factor authentication (and you should), you get to open up your phone, find the 2FA app, get the code, and type it in. To add insult to injury, the ceremony will be just different enough at every site you visit that you really don't develop much muscle memory for it.

As a consequence, when I need something from my bank, I pull out my phone and use the mobile app. And it's not just banking. This experience is replicated on any web site that requires authentication. Passwords and the authentication experience are ruining the web.

I wouldn't be surprised to see businesses abandon functional web sites in the future. There will still be some marketing there (what we used to derisively call "brochure-ware") and a pointer to the mobile app. Businesses love mobile apps not only because they can deliver a better user experience (UX) but because they allow businesses to better engage people. Notifications, for example, get people to look at the app, giving the business opportunities to increase revenue. And some things, like airline boarding passes, just work much better on mobile.

Another factor is that we consider phones to be "personal devices". They aren't designed to be multi-user. Laptops and other devices, on the other hand, can be multi-user, even if in practice they usually are not. Consequently, browsers on laptops get treated as less secure, and session invalidation periods are much shorter, requiring people to log in more frequently than in mobile apps.

Fortunately, web sites can be passwordless, relieving some of the pain. Technologies like FIDO2, WebAuthn, and SSI allow for passwordless user experiences on the web as well as on mobile. The kicker is that this isn't a trade-off with security. Passwordless options can be more secure, and even more interoperable, with a better UX than passwords. Everybody wins.


Notes

  1. This is known as "identifier-first authentication". By asking for the identifier, the authentication service can determine how to authenticate you. So, if you're using token authentication instead of passwords, it can present that next. Some places do this well, merely hiding the password field using Javascript and CSS, so that password managers can still fill the password even though it's not visible. Others don't.

Photo Credit: Login Window from AchinVerma (Pixabay)


Persistence, Programming, and Picos

Jon Udell introduced me to a fascinating talk by the always interesting r0ml. In it, r0ml argues that Postgres as a programming environment feels like a Smalltalk image (at least that's the part that's germane to this post). Jon has been working this way in Postgres for a while. He says:

For over a year, I’ve been using Postgres as a development framework. In addition to the core Postgres server that stores all the Hypothesis user, group, and annotation data, there’s now also a separate Postgres server that provides an interpretive layer on top of the raw data. It synthesizes and caches product- and business-relevant views, using a combination of PL/pgSQL and PL/Python. Data and business logic share a common environment. Although I didn’t make the connection until I watched r0ml’s talk, this setup hearkens back to the 1980s when Smalltalk (and Lisp, and APL) were programming environments with built-in persistence.
From The Image of Postgres
Referenced 2021-02-05T16:44:56-0700

As I listened to the talk, I was a little bit nostalgic for my time using Lisp and Smalltalk back in the day, but I was also excited because I realized that the model Jon and r0ml were talking about is very much alive in how one goes about building a pico system.

Picos and Persistence

Picos are persistent compute objects. Persistence is a core feature of how picos work. Picos exhibit persistence in three ways. Picos have1:

  • Persistent identity—Picos exist, with a single identity, continuously from the moment of their creation until they are destroyed.
  • Persistent state—Picos have state that programs running in the pico can see and alter.
  • Persistent availability—Picos are always on and ready to process queries and events.

Together, these properties give pico programming a different feel than what many developers are used to. I often tell my students that programmers write static documents (programs, configuration files, SQL queries, etc.) that create dynamic structures—the processes that those static artifacts create when they're run. Part of being a good programmer is being able to envision those dynamic structures as you program. They come alive in your head as you imagine the program running.

With picos, you don't have to imagine the structure. You can see it. Figure 1 shows the current state of the picos in a test I created for a collection of temperature sensors.

Figure 1: Network of Picos for Temperature Sensors (click to enlarge)

In this diagram, the black lines show the parent-child hierarchy and the dotted pink lines show the peer-to-peer connections between picos (called "subscriptions" in current pico parlance). Parent-child hierarchies are primarily used to manage the picos themselves, whereas the heterarchical connections between picos are used for programmatic communication and represent the relationships between picos. As new picos are created or existing picos are deleted, the diagram changes to show the dynamic computing structure that exists at any given time.

Clicking on one of the boxes representing a pico opens up a developer interface that enables interaction with the pico according to the rulesets that have been installed. Figure 2 shows the Testing tab of the developer interface for the io.picolabs.wovyn.router ruleset in the pico named sensor_line after the lastTemperature query has been made. Because this is a live view into the running system, the interface can be used to query the state and raise events in the pico.

Figure 2: Interacting with a Pico (click to enlarge)

A pico's state is updated by rules running in the pico in response to events that the pico sees. Pico state is made available to rules as persistent variables in KRL, the ruleset programming language. When a rule sets a persistent variable, the state is persisted after the rule has finished execution and is available to other rules that execute later2. The Testing tab allows developers to raise events and then see how that impacts the persistent state of the pico.
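
Here's a small sketch of that pattern (the ruleset, domain, and attribute names are illustrative rather than taken from the actual sensor rulesets): one rule stores the latest reading in an entity variable, and a shared function makes it available to queries like the lastTemperature query mentioned above:

ruleset com.example.temperature_store {
  meta {
    shares lastTemperature
  }
  global {
    // answers queries with the most recent stored reading
    lastTemperature = function() {
      ent:lastTemp
    }
  }
  rule store_temperature {
    select when sensor new_temperature
    always {
      // persists after the rule finishes executing
      ent:lastTemp := event:attrs{"temperature"}
    }
  }
}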

Programming Picos

As I said, when I saw r0ml's talk, I was immediately struck by how much programming picos felt like using the Smalltalk or Lisp image. In some ways, it's like working with Docker images in a Fargate-like environment since it's serverless (from the programmer's perspective). But there's far less to configure and set up. Or maybe, more accurately, the setup is linguistically integrated with the application itself and feels less onerous and disconnected.

Building a system of picos to solve some particular problem isn't exactly like using Smalltalk. In particular, in a nod to modern development methodologies, the rulesets are installed from URLs and thus can be developed in the IDE the developer chooses and versioned in git or some other versioning system. Rulesets can be installed and managed programmatically so that the system can be programmed to manage its own configuration. To that point, all of the interactions in the developer interface are communicated to the pico via an API installed in the picos. Consequently, everything the developer interface does can be done programmatically as well.

Figure 3 shows the programming workflow that we use to build production pico systems.

Figure 3: Programming Workflow (click to enlarge)

The developer may go through multiple iterations of the Develop, Build, Deploy, and Test phases before releasing the code for production use. What is not captured in this diagram is the interactive feel that the pico engine provides for the testing phase. While automated tests can check the unit and system functionality of the rules running in the pico, the developer interface provides a visual tool for seeing how the picos interact as events flow through the system. Being able to query the state of the picos and see their reaction to specific events in various configurations is very helpful in debugging problems.

A pico engine can use multiple images (one at a time), and an image can be zipped up and shared with another developer or checked into git. By default, the pico engine stores the state of the engine, including the installed rulesets, in the ~/.pico-engine/ directory. This is the image. Developers can point the engine at a different directory by setting the PICO_ENGINE_HOME environment variable, keeping different development environments or projects separate from each other and making it easy to go back to the place you left off in a particular pico application.

For example, you could have a different pico engine image for a game project and an IoT project and start up the pico engine in either environment like so:

# work on my game project
PICO_ENGINE_HOME=~/.dnd_game_image pico-engine

# work on IoT project
PICO_ENGINE_HOME=~/.iot_image pico-engine

Images and Modern Development

At first, the idea of using an image of the running system and interacting with it to develop an application may seem odd or out of step with modern development practices. After all, developers have had the ideas of layered architectures and separation of concerns hammered into them, and image-based development in picos seems to fly in the face of those conventions. But it's really not all that different.

First, large pico applications are not generally built up by hand and then pushed into production. Rather, the developers in a pico-based programming project create a system that comes into being programmatically. So, the production image is separate from the developer's work image, as one would like. Another way to think about this, if you're familiar with systems like Smalltalk and Lisp, is that programmers don't develop systems using a REPL (read-eval-print loop). Rather, they write code, install it, and raise events to cause the system to take action.

Second, the integration of persistence into the application isn't all that unusual when one considers the recent move to microservices, with local persistence stores. I built a production connected-car service called Fuse using picos some years ago. Fuse had a microservice architecture even though it was built with picos and programmed with rules.

Third, programming in image-based systems requires persistence maintenance and migration work, just like any other architecture does. For example, a service for healing API subscriptions in Fuse was also useful when new features, requiring new APIs, were introduced, since the healing worked as well for new API subscriptions as it did for existing ones. These kinds of rules allowed the production state to migrate incrementally as bugs were fixed and features added.

Image-based programming in picos can be done with all the same care and concern for persistence management and loose coupling as in any other architecture. The difference is that developers and system operators (these days often one and the same) in a pico-based development activity are saved the effort of architecting, configuring, and operating the persistence layer as a separate system. Linguistically incorporating persistence in the rules provides for more flexible use of persistence with less management overhead.

Stored procedures are unlikely to lose their stigma anytime soon, and Smalltalk images, as they were used in the 1980s, are unlikely to find a home in modern software development practices. Nevertheless, picos show that image-based development can be done in a manner consistent with the best practices we use today without losing the important benefits it brings.

Future Work

There are some improvements that should be made to the pico-engine to make image-based development better.

  • Moving picos between engines is necessary to support scaling of pico-based systems. It is still too hard to migrate picos from one engine to another. And when you do, the parent-child hierarchy is not maintained across engines. This is a particular problem for systems of picos that have varied ownership.
  • Managing images using environment variables is clunky. The engine could have better support for naming, creating, switching, and deleting images to support multiple projects.
  • Bruce Conrad has created a command-line debugging tool that allows declarations (which don't affect state) to be evaluated in the context of a particular pico. This functionality could be better integrated into the developer interface.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.


Notes

  1. These properties are dependent on the underlying pico engine and the persistence of picos is subject to availability and correct operation of the underlying infrastructure.
  2. Persistent variables are lexically scoped to a specific ruleset to create a closure over the variables. But this state can be accessed programmatically by other rulesets installed in the same pico by using the KRL module facility.


Announcing Pico Engine 1.0

Flowers Generative Art


The pico engine creates and manages picos.1 Picos (persistent compute objects) are internet-first, persistent actors that are a good choice for building reactive systems—especially in the Internet of Things.

Pico engine is the name we gave to the node.js rewrite of the Kynetx Rules Engine back in 2017. Matthew Wright and Bruce Conrad have been the principal developers of the pico engine.

The 2017 rewrite (Pico Engine 0.X) was a great success. When we started that project, I listed speed, internet-first, small deployment, and attribute-based event authorization as the goals. The 0.X rewrite achieved all of these. The new engine was small enough to be deployed on Raspberry Pis and other small computers and yet was significantly faster. One test we did on a 2015 13" MacBook Pro handled 44,504 events in over 8000 separate picos in 35 minutes and 19 seconds. The throughput was 21 events per second, or 47.6 milliseconds per request.

This past year Matthew and Bruce reimplemented the pico engine with some significant improvements and architectural changes. We've released that as Pico Engine 1.X. This blog post discusses the improvements in Pico Engine 1.X, after a brief introduction to picos so you'll know why you should care.

Picos

Picos support an actor model of distributed computation. Picos have the following three properties. In response to a received message,

  1. picos send messages to other picos—Picos respond to events and queries by running rules. Depending on the rules installed, a pico may raise events for itself or other picos.
  2. picos create other picos—Picos can create and delete other picos, resulting in a parent-child hierarchy of picos.
  3. picos change their internal state (which can affect their behavior when the next message is received)—Each pico has a set of persistent variables that can only be affected by rules that run in response to events.

I describe picos and their API and programming model in more detail elsewhere. Event-driven systems, like those built from picos, can be used to create systems that meet the goals of the Reactive Manifesto.

Despite the parent-child hierarchy, picos can be arranged in a heterarchical network for peer-to-peer communication and computation. As mentioned, picos support direct asynchronous messaging by sending events to other picos. Picos have an internal event bus for distributing those messages to rules installed in the pico. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on the bus with event scenarios declared in the event expressions. Event expressions can specify simple single-event matches or complicated event relationships with temporal ordering. Rules whose event expressions match are scheduled for execution. Executing rules may raise additional events. More detail about the event loop and pico execution model is available elsewhere.
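
Two illustrative select statements show the range (the domain and type names are invented; see the KRL documentation for the full event expression grammar). The first matches a single event; the second uses a compound event expression with a temporal window:

rule single_event {
  select when sensor heartbeat
  send_directive("heartbeat_seen")
}

rule repeated_violation {
  // matches when one threshold violation is followed by
  // another within ten minutes
  select when sensor threshold_violation
       before sensor threshold_violation
       within 10 minutes
  send_directive("repeated_violation_seen")
}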

Each pico presents a unique Event-Query API that is dependent on the specific rulesets installed in the pico. Picos share nothing with other picos except through messages exchanged between them. Picos don't know and can't directly access or affect the internal state of another pico.

As a result of their design, picos exhibit the following important properties:

  • Lock-free concurrency—picos respond to messages without locks.
  • Isolation—state changes in one pico cannot affect the state in other picos.
  • Location transparency—picos can live on multiple hosts and so computation can be scaled easily and across network boundaries.
  • Loose coupling—picos are only dependent on one another to the extent of their design.

Pico Engine 1.0

Version 1.0 is a rewrite of pico-engine that introduces major improvements:

  • A more pico-centric architecture that makes picos less dependent on a particular engine.
  • A more modular design that supports future improvements and makes the engine code easier to maintain and understand.
  • Ruleset versioning and sharing to facilitate decentralized code sharing.
  • Better, attribute-based channel policies for more secure system architecture.
  • A new UI written in React that uses the event-query APIs of the picos themselves to render.

One of our goals for future pico ecosystems is to build not just distributed, but decentralized peer-to-peer systems. One of the features we'd very much like picos to have is the ability to move between engines seamlessly and with little friction. Pico Engine 1.X better supports this roadmap.

Figure 1 shows a block diagram of the primary components. The new engine is built on top of two primary modules: pico-framework and select-when.

Figure 1: Pico Engine Modular Architecture (click to enlarge)

The pico-framework handles the building blocks of a pico-based system:

  • Pico lifecycle—picos exist from the time they're created until they're deleted.
  • Pico parent/child relationships—Every pico, except for the root pico, has a parent. All picos may have children.
  • Events—picos respond to events based on the rules that are installed in the pico. The pico-framework makes use of the select-when library to create rules that pattern match on event streams.
  • Queries—picos can also respond to queries based on the rulesets that are installed in the pico.
  • Channels—Events and queries arrive on channels that can be created and deleted. Access control policies for events and queries on a particular channel are also managed by the pico-framework.
  • Rulesets—the framework manages installing, caching, flushing, and sandboxing rulesets.
  • Persistence—all picos have persistence and can manage persistent data. The pico-framework uses Levelup to define an interface for a LevelDB-compatible data store and uses it to handle persistence of picos.

The pico-framework is language agnostic. Pico-engine-core combines pico-framework with facilities for compiling and executing KRL, the rule language used to program rulesets. KRL rulesets are compiled to Javascript for the pico-framework. Pico-engine-core contains a registry (transparent to the user) that caches compiled rulesets that have been installed in picos. In addition, pico-engine-core includes a number of standard libraries for KRL. The Javascript produced by the rewrite is much more readable than that rendered by the 0.X engine. Because of the new modular design, rulesets written entirely in Javascript can be added to a pico system.

The pico engine combines the pico-engine-core with a LevelDB-compliant persistent store, an HTTP server, a log writer, and a ruleset loader for full functionality.

Wrangler

Wrangler is the pico operating system. Wrangler presents an event-query API for picos that supports programmatically managing the pico lifecycle, channels and policies, and rulesets. Every pico created by the pico engine has Wrangler installed automatically to aid in programmatically interacting with picos.
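
As a small taste of that API (a sketch that assumes Wrangler's children() function, which its documentation lists among the shared functions for querying a pico's children; the ruleset name is made up), a ruleset can use Wrangler as a module:

ruleset com.example.child_counter {
  meta {
    use module io.picolabs.wrangler alias wrangler
    shares childCount
  }
  global {
    // answers queries with the number of children this pico has
    childCount = function() {
      wrangler:children().length()
    }
  }
}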

One of the goals of the new pico engine was to support picos moving between engines. Picos relied too heavily on direct interaction with the engine APIs in 0.X and thus were more tightly coupled to the engine than is necessary. The 1.0 engine minimizes the coupling to the largest extent possible. Wrangler, written in KRL, builds upon the core functionality provided by the engine to provide developers with an API for building pico systems programmatically. A great example of that is the Pico Engine Developer UI, discussed next.

Pico Engine Developer UI

Another significant change to the pico engine with the 1.0 release was a rewritten Developer UI. In 0.X, the UI was hard coded into the engine. The 1.X UI is a single page web application (SPA) written in React. The SPA uses an API that the engine provides to get the event channel identifier (ECI) for the root pico in the engine. The UI SPA uses that ECI to connect to the API implemented by the io.picolabs.pico-engine-ui.krl ruleset (which is installed automatically in every pico).

Figure 2 shows the initial Developer UI screen. The display is the network of picos in the engine. Black lines represent parent-child relationships and form a tree with the root pico at the root. The pink lines are subscriptions between picos—two-way channels formed by exchanging ECIs. Subscriptions are used to form peer-to-peer (heterarchical) relationships between picos and do not necessarily have to be on the same engine.

Figure 2: Pico Engine UI (click to enlarge)

When a box representing a pico in the Developer UI is clicked, the display shows an interface for performing actions on the pico as shown in Figure 3. The interface shows a number of tabs.

  • The About tab shows information about the pico, including its parent and children. The interface allows information about the pico to be changed and new children to be created.
  • The Rulesets tab shows any rulesets installed in the pico, allows them to be flushed from the ruleset cache, and for new rulesets to be installed.
  • The Channels tab is used to manage channels and channel policies.
  • The Logging tab shows execution logs for the pico.
  • The Testing tab provides an interface for exercising the event-query APIs that the rulesets installed in the pico provide.
  • The Subscriptions tab provides an interface for managing the pico's subscriptions and creating new ones.

Figure 3: Pico Developer Interface (click to enlarge)

Because the Developer UI is just using the APIs provided by the pico, everything it does (and more) can be done programmatically by code running in the picos themselves. Most useful pico systems will be created and managed programmatically using Wrangler. The Developer UI provides a convenient console for exploring and testing during development. The io.picolabs.pico-engine-ui.krl ruleset can be replaced or augmented by another ruleset the developer installs on the pico to provide a different interface to the pico. Interesting pico-based systems will have applications that interact with their APIs to present the user interface. For example, Manifold is a SPA written in React that creates a system of picos for use in IoT applications.

Come Contribute

The pico engine is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing are in the repository's README.

In addition to the work on the engine itself, one of the primary workstreams at present is to complete Bruce Conrad's excellent work to use DIDs and DIDComm as the basis for inter-pico communication, called ACA-Pico (Aries Cloud Agent - Pico). We're holding monthly meetings and there's a repository of current work complete with issues. This work is important because it will replace the current subscriptions method of connecting heterarchies of picos with DIDComm. This has the obvious advantages of being more secure and aligned with an important emerging standard. More importantly, because DIDComm is protocological, this will support protocol-based interactions between picos, including credential exchange.

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you want support, contact me and we'll get you added to the Picolabs Slack.


Notes

  1. The pico engine is to picos as the docker engine is to docker containers.

Photo Credit: Flowers Generative Art from dp792 (Pixabay)


Generative Identity

Generative Art Ornamental Sunflower

The Generative Self-Sovereign Internet explored the generative properties of the self-sovereign internet, a secure overlay network created by DID connections. The generative nature of the self-sovereign internet is underpinned by the same kind of properties that make the internet what it is, promising a more secure and private, albeit no less useful, internet for tomorrow.

In this article, I explore the generativity of self-sovereign identity—specifically the exchange of verifiable credentials. One of the key features of the self-sovereign internet is that it is protocological—the messaging layer supports the implementation of protocol-mediated interchanges on top of it. This extensibility underpins its generativity. Two of the most important protocols defined on top of the self-sovereign internet support the exchange of verifiable credentials, as we'll see below. Together, these protocols work on top of the self-sovereign internet to give rise to self-sovereign identity through a global identity metasystem.

Verifiable Credentials

While the control of self-certifying identifiers in the form of DIDs is the basis for the autonomy of the self-sovereign internet, that autonomy is made effective through the exchange of verifiable credentials. Using verifiable credentials, an autonomous actor on the self-sovereign internet can prove attributes to others in a way they can trust. Figure 1 shows the SSI stack. The self-sovereign internet is labeled "Layer Two" in this figure. Credential exchange happens on top of that in Layer Three.

Figure 1: SSI Stack (click to enlarge)

Figure 2 shows how credentials are exchanged. In this diagram, Alice has DID-based relationships with Bob, Carol, Attestor.org, and Certiphi.com. Alice has received a credential from Attestor.org. The credential contains attributes that Attestor.org is willing to attest belong to Alice. For example, Attestor might be her employer attesting that she is an employee. Attestor likely gave her the credential for its own purposes. Maybe Alice uses it for passwordless login at company web sites and services and to purchase meals at the company cafeteria. She might also use it at partner websites (like the benefits provider) to provide shared authentication without federation (and its associated infrastructure). Attestor is acting as a credential issuer. We call Alice a credential holder in this ceremony. The company and partner websites are credential verifiers. Credential issuance is a protocol that operates on top of the self-sovereign internet.

Figure 2: Credential Exchange (click to enlarge)

Even though Attestor.org issued the credential to Alice for its own purposes, she holds it in her wallet and can use it at other places besides Attestor. For example, suppose she is applying for a loan and her bank, Certiphi, wants proof that she's employed and has a certain salary. Alice could use the credential from Attestor to prove to Certiphi that she's employed and that her salary exceeds a given threshold1. Certiphi is also acting as a credential verifier. Credential proof and verification is also a protocol that operates on top of the self-sovereign internet. As shown in Figure 2, individuals can also issue and verify credentials.

We say Alice "proved" attributes to Certiphi from her credentials because the verification protocol uses zero-knowledge proofs to support minimal disclosure of data. Thus the credential that Alice holds from Attestor might contain a rich array of information, but Alice need only disclose the information that Certiphi needs for her loan. In addition, the proof process ensures that Alice can't be correlated through the DIDs she has shared with others. Attribute data isn't tied to DIDs or the keys that are currently assigned to the DID. Rather than attributes being bound to identifiers and keys, Alice's identifiers and keys empower the attributes.

Certiphi can validate important properties of the credential. Certiphi is able to validate the fidelity of the credential by reading the credential definition from the ledger (Layer One in Figure 1), retrieving Attestor's public DID from the credential definition, and resolving it to get Attestor.org's public key to check the credential's signature. At the same time, the presentation protocol allows Certiphi to verify that the credential is being presented by the person it was issued to and that it hasn't been revoked (using a revocation registry stored in Layer One). Certiphi does not need to contact Attestor or have any prior business relationship to verify these properties.

The global identity metasystem, shown as the yellow box in Figure 1, comprises the ledger at Layer 1, the self-sovereign internet at Layer 2, and the credential exchange protocols that operate on top of it. Together, these provide the necessary features and characteristics to support self-sovereign identity.

Properties of Credential Exchange

Verifiable credentials have five important characteristics that mirror how credentials work in the offline world:

  1. Credentials are decentralized and contextual. There is no central authority for all credentials. Every party can be an issuer, a holder, or a verifier. Verifiable credentials can be adapted to any country, any industry, any community, or any set of trust relationships.
  2. Credential issuers decide what data is contained in their credentials. Anyone can write credential schemas to the ledger. Anyone can create a credential definition based on any of these schemas.
  3. Verifiers make their own decisions about which credentials to accept—there's no central authority who determines what credentials are important or which are used for a given purpose.
  4. Verifiers do not need to contact issuers to perform verification—that's what the ledger is for. Credential verifiers don't need to have any technical, contractual, or commercial relationship with credential issuers in order to determine the credentials' fidelity.
  5. Credential holders are free to choose which credentials to carry and what information to disclose. People and organizations are in control of the credentials they hold and determine what to share with whom.

These characteristics underlie several important properties that support the generativity of credential exchange. Here are the most important:

Private—Privacy by Design is baked deep into the architecture of the identity metasystem, as reflected by several fundamental architectural choices:

  1. Peer DIDs are pairwise unique and pseudonymous by default to prevent correlation.
  2. Personal data is never written to the ledgers at Layer 1 in Figure 1—not even in encrypted or hashed form. Instead, all private data is exchanged over peer-to-peer encrypted connections between off-ledger agents at Layer 2. The ledger is used for anchoring rather than publishing encrypted data.
  3. Credential exchange has built-in support for zero-knowledge proofs (ZKP) to avoid unnecessary disclosure of identity attributes.
  4. As we saw earlier, verifiers don’t need to contact the issuer to verify a credential. Consequently, the issuer doesn’t know when or where the credential is used.

Decentralized—decentralization follows directly from the fact that no one owns the infrastructure that supports credential exchange; ownership is the primary criterion for judging the degree of decentralization in a system. The infrastructure, like that of the internet, is operated by many organizations and people bound by protocol.

Heterarchical—a heterarchy is a "system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways." Participants in credential exchange relate to each other as peers and are autonomous.

Interoperable—verifiable credentials have a standard format, readily accessible schemas, and standard protocols for issuance, proving (presenting), and verification. Participants can interact with anyone else so long as they use tools that follow the standards and protocols. Credential exchange isn't a single, centralized system from a single vendor with limited pieces and parts. Rather, interoperability relies on interchangeable parts, built and operated by various parties. Interoperability supports substitutability, a key factor in autonomy and flexibility.

Substitutable—the tools for issuing, holding, proving, and verifying are available from multiple vendors and follow well-documented, open standards. Because these tools are interoperable, issuers, holders, and verifiers can choose software, hardware, and services without fear of being locked into a proprietary tool. Moreover, because many of the attributes the holder needs to prove (e.g. email address or even employer) will be available on multiple credentials, the holder can choose between credentials. Usable substitutes provide choice and freedom.

Flexible—closely related to substitutability, flexibility allows people to select appropriate service providers and features. No single system can anticipate all the scenarios that will be required for billions of individuals to live their own effective lives. The characteristics of credential exchange allow for context-specific scenarios.

Reliable and Censorship Resistant—people, businesses, and others must be able to exchange credentials without worrying that the infrastructure will go down, stop working, go up in price, or get taken over by someone who would do them harm. Substitutability of tools and credentials combined with autonomy makes the system resistant to censorship. There is no hidden third party or intermediary in Figure 2. Credentials are exchanged peer-to-peer.

Non-proprietary and Open—no one has the power to change how credentials are exchanged by fiat. Furthermore, the underlying infrastructure is less likely to go out of business and stop operation because its maintenance and operation are decentralized instead of being in the hands of a single organization. The identity metasystem has the same three virtues of the Internet that Doc Searls and Dave Weinberger enumerated as NEA: No one owns it, Everyone can use it, and Anyone can improve it. The protocols and code that enable the metasystem are open source and available for review and improvement.

Agentic—people can act as autonomous agents, under their self-sovereign authority. The most vital value proposition of self-sovereign identity is autonomy—not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy requires that participants interact as peers in the system, which the architecture of the metasystem supports.

Inclusive—inclusivity is more than being open and permissionless. Inclusivity requires design that ensures people are not left behind. For example, some people cannot act for themselves for legal (e.g. minors) or other (e.g. refugees) reasons. Support for digital guardianship ensures that those who cannot act for themselves can still participate.

Universal—successful protocols eat other protocols until only one survives. Credential exchange, built on the self-sovereign internet and based on protocol, has network effects that drive interoperability leading to universality. This doesn't mean that the metasystem will be mandated. Rather, one protocol will mediate all interaction because everyone in the ecosystem will conform to it out of self-interest.

The Generativity of Credential Exchange

Applying Zittrain's framework for evaluating generativity is instructive for understanding the generative properties of self-sovereign identity.

Capacity for Leverage

In Zittrain's words, leverage is the extent to which an object "enables valuable accomplishments that otherwise would be either impossible or not worth the effort to achieve." Leverage multiplies effort, reducing the time and cost necessary to innovate new capabilities and features.

Traditional identity systems have been anemic, supporting simple relationships focused on authentication and a few basic attributes their administrators need. They can't easily be leveraged by anyone but their owner. Federation through SAML or OpenID Connect has allowed the authentication functionality to be leveraged in a standard way, but authentication is just a small portion of the overall utility of a digital relationship.

One example of the capacity of credential exchange for leverage is that it could be the foundation for a system that disintermediates platform companies like Uber, AirBnB, and the food delivery platforms. Platform companies build proprietary trust frameworks to intermediate exchanges between parties, charging exorbitant rents for what ought to be a natural interaction among peers. Credential exchange can open these trust frameworks up to create open marketplaces for services.

The next section on Adaptability lists a number of uses for credentials. The identity metasystem supports all these use cases with minimal development work on the part of issuers, verifiers, and holders. And because the underlying system is interoperable, an investment in the tools necessary to solve one identity problem with credentials can be leveraged by many others without new investment. The cost to define a credential is very low (often less than $100) and once the definition is in place, there is no cost to issue credentials against it. A small investment can allow an issuer to issue millions of credentials of different types for different use cases.

Adaptability

Adaptability can refer to a technology's ability to be used for multiple activities without change as well as its capacity for modification in service of new use cases. Adaptability is orthogonal to a technology's capacity for leverage. An airplane, for example, offers incredible leverage, allowing goods and people to be transported over long distances quickly. But airplanes are neither useful in activities outside transportation nor easily modified for different uses. A technology that supports hundreds of use cases is more generative than one that is useful in only a few.

Identity systems based on credential exchange provide people with the means of operationalizing their online relationships by providing them the tools for acting online as peers and managing the relationships they enter into. Credential exchange allows for ad hoc interactions that were not, or could not be, imagined a priori.

The flexibility of credentials ensures they can be used in a variety of situations. Every form or official piece of paper is a potential credential. Here are a few examples of common credentials:

  • Employee badges
  • Driver's license
  • Passport
  • Wire authorizations
  • Credit cards
  • Business registration
  • Business licenses
  • College transcripts
  • Professional licensing (government and private)

But even more important, every bundle of data transmitted in a workflow is a potential credential. Since credentials are just trustworthy containers for data, there are many more use cases that may not be typically thought of as credentials:

  • Invoices and receipts
  • Purchase orders
  • Airline or train ticket
  • Boarding pass
  • Certificate of authenticity (e.g. for art, other valuables)
  • Gym (or any) membership card
  • Movie (or any) tickets
  • Insurance cards
  • Insurance claims
  • Titles (e.g. property, vehicle, etc.)
  • Certificate of provenance (e.g. non-GMO, ethically sourced, etc.)
  • Prescriptions
  • Fractional ownership certificates for high value assets
  • CO2 rights and carbon credit transfers
  • Contracts

Since even a small business might issue receipts or invoices, have customers who use the company website, or use employee credentials, most businesses will define at least one credential and many will need many more. There are potentially tens of millions of different credential types. Many will use common schemas but each credential from a different issuer constitutes a different identity credential for a different context.

With the ongoing work in Hyperledger Aries, these use cases expand even further. With a “redeemable credentials” feature, holders can prove possession of a credential in a manner that is double-spend proof without a ledger. This works for all kinds of redemption use cases like clocking back in at the end of a shift, voting in an election, posting an online review, or redeeming a coupon.

The information we need in any given relationship varies widely with context. Credential exchange protocols must be flexible enough to support many different situations. For example, in You've Had an Automobile Accident, I describe a use case that requires the kinds of ad hoc, messy, and unpredictable interactions that happen all the time in the physical world. Credential exchange readily adapts to these context-dependent, ad hoc situations.

Ease of Mastery

Ease of mastery refers to the capacity of a technology to be easily and broadly adapted and adopted. One of the core features of credential exchange on the identity metasystem is that it supports the myriad use cases described above without requiring new applications or user experiences for each one. The digital wallet at the heart of credential exchange activities on the self-sovereign internet supports two primary artifacts and the user experiences to manage them: connections and credentials. Like the web browser, even though multiple vendors provide digital wallets, the underlying protocol informs a common user experience.

A consistent user experience doesn’t mean a single user interface. Rather the focus is on the experience. As an example, consider an automobile. My grandfather, who died in 1955, could get in a modern car and, with only a little instruction, successfully drive it. Consistent user experiences let people know what to expect so they can intuitively understand how to interact in any given situation regardless of context.

Accessibility

Accessible technologies are easy to acquire, inexpensive, and resistant to censorship. Because of its openness, standardization, and support by multiple vendors, credential exchange is easily available to anyone with access to a computer or phone with an internet connection. But we can't limit its use to individuals who have digital access and legal capacity. Ensuring that technical and legal architectures for credential exchange support guardianship and use on borrowed hardware can provide accessibility to almost everyone in the world.

The Sovrin Foundation's Guardianship Working Group has put significant effort into understanding the technical underpinnings (e.g., guardianship and delegation credentials), legal foundations (e.g., guardianship contracts), and business drivers (e.g., economic models for guardianship). They have produced an excellent whitepaper on guardianship that "examines why digital guardianship is a core principle for Sovrin and other SSI architectures, and how it works from inception to termination through looking at real-world use cases and the stories of two fictional dependents, Mya and Jamie."

Self-Sovereign Identity and Generativity

In What is SSI?, I made the claim that SSI requires decentralized identifiers, credential exchange, and autonomy for participants. Dick Hardt pushed back on that a bit and asked me whether decentralized identifiers were really necessary. We had several fun discussions on that topic.

In that article, I unfortunately used decentralized identifiers and verifiable credentials as placeholders for their properties. Once I started looking at properties, I realized that generative identity can't be built on an administrative identity system. Self-sovereign identity is generative not only because of the credential exchange protocols but also because of the properties of the self-sovereign internet upon which those protocols are defined and operate. Without the self-sovereign internet, enabled through DIDComm, you might implement something that works as SSI, but it won't provide the leverage and adaptability necessary to create a generative ecosystem of uses with the network effects needed to propel it to ubiquity.

Our past approach to digital identity has put us in a position where people's privacy and security are threatened by the administrative identity architecture it imposes. Moreover, limiting its scope to authentication and a few unauthenticated attributes, repeated across thousands of websites with little interoperability, has created confusion, frustration, and needless expense. None of the identity systems in common use today offer support for the same kind of ad hoc attribute sharing that happens everyday in the physical world. The result has been anything but generative. Entities who rely on attributes from several parties must perform integrations with all of them. This is slow, complex, and costly, so it typically happens only for high-value applications.

An identity metasystem that supports protocol-mediated credential exchange running on top of the self-sovereign internet solves these problems and promises generative identity for everyone. By starting with people and their innate autonomy, generative identity supports online activities that are life-like and natural. Generative identity allows us to live digital lives with dignity and effectiveness, contemplates and addresses the problems of social inclusion, and supports economic access for people around the globe.

Notes

  1. For Alice to prove things about her salary, Attestor would have to include that in the credential they issue to Alice.

Photo Credit: Generative Art Ornamental Sunflower from dp792 (Pixabay)


The Generative Self-Sovereign Internet

Seed Germinating

This is part one of a two-part series on the generativity of SSI technologies. This article explores the properties of the self-sovereign internet and makes the case that they justify its generativity claims. The second part will explore the generativity of verifiable credential exchange, the essence of self-sovereign identity.

In 2005, Jonathan Zittrain wrote a compelling and prescient examination of the generative capacity of the Internet and its tens of millions of attached PCs. Zittrain defined generativity thus:

Generativity denotes a technology’s overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences.

Zittrain masterfully describes the extreme generativity of the internet and its attached PCs, explains why the openness of both the network and the attached computers is so important, discusses threats to the generative nature of the internet, and proposes ways that the internet can remain generative while addressing some of those threats. While the purpose of this article is not to review Zittrain's paper in detail, I recommend you take some time to explore it.

Generative systems use a few basic rules, structures, or features to yield behaviors that can be extremely varied and unpredictable. Zittrain goes on to lay out the criteria for evaluating the generativity of a technology:

Generativity is a function of a technology’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility.

This sentence sets forth four important criteria for generativity:

  1. Capacity for Leverage—generative technology makes difficult jobs easier, sometimes even possible. Leverage is measured by the capacity of a device to reduce effort.
  2. Adaptability—generative technology can be applied to a wide variety of uses with little or no modification. Where leverage speaks to a technology's depth, adaptability speaks to its breadth. Many very useful devices (e.g. airplanes, saws, and pencils) are nevertheless fairly narrow in their scope and application.
  3. Ease of Mastery—generative technology is easy to adopt and adapt to new uses. Many billions of people use a PC (or mobile device) to perform tasks important to them without significant skill. As they become more proficient in its use, they can apply it to even more tasks.
  4. Accessibility—generative technology is easy to come by and access. Access is a function of cost, deployment, regulation, monopoly power, secrecy, and anything else which introduces artificial scarcity.

The identity metasystem I've written about in the past is composed of several layers that provide its unique functionality. This article uses Zittrain's framework, outlined above, to explore the generativity of what I've called the Self-Sovereign Internet, the second layer in the stack shown in Figure 1. A future article will discuss the generativity of credential exchange at layer three.

Figure 1: SSI Stack (click to enlarge)

The Self-Sovereign Internet

In DIDComm and the Self-Sovereign Internet, I make the case that the network of relationships created by the exchange of decentralized identifiers (layer 2 in Figure 1) forms a new, more secure layer on the internet. Moreover, the protocological properties of DIDComm make that layer especially useful and flexible, mirroring the internet itself.

This kind of "layer" is called an overlay network. An overlay network comprises virtual links that correspond to a path in the underlying network. Secure overlay networks rely on an identity layer based on asymmetric key cryptography to ensure message integrity, non-repudiation, and confidentiality. TLS (HTTPS) is a secure overlay, but it is incomplete because it's not symmetrical. Furthermore, it's relatively inflexible because it overlays a network layer using a client-server protocol1.

In Key Event Receipt Infrastructure (KERI) Design, Sam Smith makes the following important point about secure overlay networks:

The important essential feature of an identity system security overlay is that it binds together controllers, identifiers, and key-pairs. A sender controller is exclusively bound to the public key of a (public, private) key-pair. The public key is exclusively bound to the unique identifier. The sender controller is also exclusively bound to the unique identifier. The strength of such an identity system based security overlay is derived from the security supporting these bindings.
From Key Event Receipt Infrastructure (KERI) Design
Referenced 2020-12-21T11:08:57-0700

Figure 2 shows the bindings between these three components of the secure overlay.

Figure 2: Binding of controller, authentication factors, and identifiers that provide the basis for a secure overlay network. (click to enlarge)

In The Architecture of Identity Systems, I discuss the strength of these critical bindings in various identity system architectures. The key point for this discussion is that the peer-to-peer network created by peer DID exchanges constitutes an overlay with an autonomic architecture, providing not only the strongest possible bindings between the controller, identifiers, and authentication factors (public key), but also requiring no external trust basis (like a ledger) because peer DIDs are self-certifying.

DIDs allow us to create cryptographic relationships, solving significant key management problems that have plagued asymmetric cryptography since its inception. Consequently, regular people can use a general-purpose secure overlay network based on DIDs. The DID network that is created when people use these relationships provides a protocol, DIDComm, that is every bit as flexible and useful as TCP/IP.

Consequently, communications over a DIDComm-enabled peer-to-peer network are as generative as the internet itself. Thus, the secure overlay network formed by DIDComm connections represents a self-sovereign internet, emulating the underlying internet's peer-to-peer messaging in a way that is both secure and trustworthy2 without the need for external third parties3.

Properties of the Self-Sovereign Internet

In World of Ends, Doc Searls and Dave Weinberger enumerate the internet's three virtues:

  1. No one owns it.
  2. Everyone can use it.
  3. Anyone can improve it.

These virtues apply to the self-sovereign internet as well. As a result, the self-sovereign internet displays important properties that support its generativity. Here are the most important:

Decentralized—decentralization follows directly from the fact that no one owns it. That no one owns it is the primary criterion for judging the degree of decentralization in a system.

Heterarchical—a heterarchy is a "system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways." Nodes in a DIDComm-based network relate to each other as peers. This is a heterarchy; there is no inherent ranking of nodes in the architecture of the system.

Interoperable—regardless of what providers or systems we use to connect to the self-sovereign internet, we can interact with any other principals who are using it so long as they follow protocol4.

Substitutable—the DIDComm protocol defines how systems that use it must behave to achieve interoperability. That means that anyone who understands the protocol can write software that uses DIDComm. Interoperability ensures that we can operate using a choice of software, hardware, and services without fear of being locked into a proprietary choice. Usable substitutes provide choice and freedom.

Reliable and Censorship Resistant—people, businesses, and others must be able to use the secure overlay network without worrying that it will go down, stop working, go up in price, or get taken over by someone who would harm it or its users. This is larger than mere technical trust that a system will be available and extends to the issue of censorship.

Non-proprietary and Open—no one has the power to change the self-sovereign internet by fiat. Furthermore, it can't go out of business and stop operation because its maintenance and operation are distributed instead of being centralized in the hands of a single organization. Because the self-sovereign internet is an agreement rather than a technology or system, it will continue to work.

The Generativity of the Self-Sovereign Internet

Applying Zittrain's framework for evaluating generativity is instructive for understanding the generative properties of the self-sovereign internet.

Capacity for Leverage

In Zittrain's words, leverage is the extent to which an object "enables valuable accomplishments that otherwise would be either impossible or not worth the effort to achieve." Leverage multiplies effort, reducing the time and cost necessary to innovate new capabilities and features. Like the internet, DIDComm's extensibility through protocols enables the creation of special-purpose networks and data distribution services on top of it. By providing a secure, stable, trustworthy platform for these services, DIDComm-based networks reduce the effort and cost associated with these innovations.

Like a modern operating system's application programming interface (API), DIDComm provides a standardized platform supporting message integrity, non-repudiation, and confidentiality. Programmers get the benefits of a trusted messaging system without the need for expensive and difficult development.

Adaptability

Adaptability can refer to a technology's ability to be used for multiple activities without change as well as its capacity for modification in service of new use cases. Adaptability is orthogonal to capacity for leverage. An airplane, for example, offers incredible leverage, allowing goods and people to be transported over long distances quickly. But airplanes are neither useful in activities outside transportation nor easily modified for different uses. A technology that supports hundreds of use cases is more generative than one that is useful in only a few.

Like TCP/IP, DIDComm makes few assumptions about how the secure messaging layer will be used. Thus the network formed by the nodes in a DIDComm network can be adapted to any number of applications. Moreover, because a DIDComm-based network is decentralized and self-certifying, it is inherently scalable for many uses.

Ease of Mastery

Ease of mastery refers to how easily and broadly a technology can be adapted and adopted. The secure, trustworthy platform of the self-sovereign internet allows developers to create applications without worrying about the intricacies of the underlying cryptography or key management.

At the same time, because of its standard interface and protocol, DIDComm-based networks can present users with a consistent user experience that reduces the skill needed to establish and use connections. Just like a browser presents a consistent user experience on the web, a DIDComm agent can present users with a consistent user experience for basic messaging, as well as specialized operations that run over the basic messaging system.

Of special note is key management, which has been the Achilles heel of previous attempts at secure overlay networks for the internet. Because of the nature of decentralized identifiers, the identifier is separated from the public key, allowing keys to be rotated when needed without also needing to refresh the identifier. This greatly reduces the need for people to manage or even see keys. People focus on the relationships and the underlying software manages the keys.5
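To make that concrete, here is a minimal TypeScript sketch, with invented field names rather than a normative DID Document schema, of why rotation doesn't disturb the identifier: the DID is a stable name, and the keys behind it are what change.

```typescript
// Illustrative sketch only: the DID is a stable name, while the key
// material behind it can change. Field names are invented.
interface DidRecord {
  did: string;              // stable identifier; never changes
  currentPublicKey: string; // replaceable key material
  keyEventLog: string[];    // append-only history of rotations
}

function rotateKey(record: DidRecord, newPublicKey: string): DidRecord {
  return {
    ...record,
    currentPublicKey: newPublicKey,
    // The log lets a peer verify the chain of rotations back to inception.
    keyEventLog: [...record.keyEventLog, `rotated-to:${newPublicKey}`],
  };
}

// The relationship keeps pointing at the same DID even after rotation.
const before: DidRecord = { did: "did:peer:placeholder", currentPublicKey: "key-1", keyEventLog: [] };
const after = rotateKey(before, "key-2");
console.log(after.did === before.did); // true
```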

Accessibility

Accessible technologies are easy to acquire, inexpensive, and resistant to censorship. DIDComm's accessibility is a product of its decentralized and self-certifying nature. Protocols and implementing software are freely available to anyone without intellectual property encumbrances. Multiple vendors, and even open-source tools, can easily use DIDComm. No central gatekeeper or other third party is necessary to initiate a DIDComm connection in service of a digital relationship. Moreover, because no specific third parties are necessary, censorship of use is difficult.

Conclusion

Generativity allows decentralized actors to create cooperating, complex structures and behavior. No one person or group can or will think of all the possible uses, but each is free to adapt the system to their own use. The architecture of the self-sovereign internet exhibits a number of important properties, and its generativity depends on those properties. The true value of the self-sovereign internet is that it provides a leverageable, adaptable, usable, accessible, and stable platform upon which others can innovate.


Notes

  1. Implementing general-purpose messaging on HTTP is not straightforward, especially when combined with non-routable IP addresses for many clients. On the other hand, simulating client-server interactions on a general-purpose messaging protocol is easy.
  2. I'm using "trust" in the cryptographic sense, not in the reputational sense. Cryptography allows us to trust the fidelity of the communication but not its content.
  3. Admittedly, the secure overlay is running on top of a network with a number of third parties, some benign and others not. Part of the challenge of engineering a functional secure overlay with self-sovereignty is mitigating the effects that these third parties can have within the self-sovereign internet.
  4. Interoperability is, of course, more complicated than merely following the protocols. Daniel Hardman does an excellent job of discussing this for verifiable credentials (a protocol that runs over DIDComm), in Getting to Practical Interop With Verifiable Credentials.
  5. More details about some of the ways software can greatly reduce the burden of key management when things go wrong can be found in What If I Lose My Phone? by Daniel Hardman.

Photo Credit: Seed Germination from USDA (CC0)


Relationships in the Self-Sovereign Internet of Things

F-150

In The Self-Sovereign Internet of Things, I introduced the role that Self-Sovereign Identity (SSI) can play in the internet of things (IoT). The self-sovereign internet of things (SSIoT) relies on the DID-based relationships that SSI provides, and their support for standardized protocols running over DIDComm, to create an internet of things that is much richer, more secure, and more privacy-respecting than the CompuServe of Things we're being offered today. In this post, I extend the use cases I offered in the previous post and discuss the role the heterarchical relationships found in the SSIoT play.

For this post, we're going to focus on Alice's relationship with her F-150 truck and its relationships with other entities. Why a vehicle? Because in 2013 and 2014 I built a commercial connected car product called Fuse that used the relationship-centric model I'm discussing here1. In addition, vehicles exist in a rich, complicated ecosystem that offers many opportunities for interesting relationships. Figure 1 shows some of these.

Vehicle relationships
Figure 1: Vehicle relationships (click to enlarge)

The most important relationship that a car has is with its owner. But there's more than one owner over the car's lifetime. At the beginning of its life, the car's owner is the manufacturer. Later the car is owned by the dealership, and then by a person or finance company. And, of course, cars are frequently resold. Over the course of its lifetime a car will have many owners. Consequently, the car's agent must be smart enough to handle these changes in ownership and the resulting changes in authorizations.

In addition to the owner, the car has relationships with other people: drivers, passengers, and pedestrians. The nature of these relationships changes over time. For example, the car probably needs to maintain a relationship with the manufacturer and dealer even after they are no longer owners. With these changes to the relationship come changes in rights and responsibilities.

In addition to relationships with owners, cars also have relationships with other players in the vehicle ecosystem, including mechanics, gas stations, insurance companies, finance companies, and government agencies. Vehicles exchange data and money with these players over time. And the car might have relationships with other vehicles, traffic signals, the roadway, and even potholes.

The following sections discuss three scenarios involving Alice, the truck, and other people, institutions, and things.

Multiple Owners

One of the relationship types that the CompuServe of Things fails to handle well is multiple owners. Some companies try and others just ignore it. The problem is that when the service provider intermediates the connection to the thing, they have to account for multiple owners and allow those relationships to change over time. For a high-value product, the engineering effort is justified, but for many others, it simply doesn't happen.

Multiple Owners
Figure 2: Multiple Owners (click to enlarge)

Figure 2 shows the relationships of two owners, Alice and Bob, with the truck. The diagram is simple and hides some of the complexity of the truck dealing with multiple owners. But as I discuss in Fuse with Two Owners, some of this is simply ensuring that developers don't assume a single owner when they develop services. The infrastructure for supporting it is built into DIDComm, including standardized support for sub protocols like Introduction.

Lending the Truck

People lend things to friends and neighbors all the time. And people rent things out. Platforms like AirBnB and Outdoorsy are built to support this for high-value rentals. But what if we could do it for anything at any time without an intermediating platform? Figure 3 shows the relationships between Alice and her friend Carol who wants to borrow the truck.

Borrowing the Truck
Figure 3: Borrowing the Truck (click to enlarge)

Like the multiple owner scenario, Alice would first have a connection with Carol and introduce her to the truck using the Introduction sub protocol. The introduction would give the truck permission to connect to Carol and also tell the truck's agent what protocols to expose to Carol's agent. Alice would also set the relationship's longevity. The specific permissions that the "borrower" relationship enables depend, of course, on the nature of the thing.
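To picture what the truck's agent records after the introduction, here is a minimal TypeScript sketch with invented field names: who the peer is, which protocols are exposed to them, and how long the relationship lasts. A real agent would derive this from the Introduction protocol messages rather than a hand-built object.

```typescript
// Illustrative only: a policy record the truck's agent might keep for
// Carol after Alice's introduction. Field names are invented.
interface RelationshipPolicy {
  peerDid: string;            // Carol's peer DID for this relationship
  role: "owner" | "borrower";
  allowedProtocols: string[]; // protocols the agent will expose to her
  expiresAt: string;          // relationship longevity, set by Alice
}

const carolPolicy: RelationshipPolicy = {
  peerDid: "did:peer:carol-placeholder",
  role: "borrower",
  allowedProtocols: ["unlock", "start", "trip-log"],
  expiresAt: "2021-06-01T00:00:00Z",
};

// The agent consults the policy before answering any message from Carol.
function allows(policy: RelationshipPolicy, protocol: string): boolean {
  return (
    new Date() < new Date(policy.expiresAt) &&
    policy.allowedProtocols.includes(protocol)
  );
}
```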

The data that the truck stores for different activities depends on these relationships. For example, the owner is entitled to know everything, including trips. But someone who borrows the car should be able to see their own trips, but not those of other drivers. Relationships dictate the interactions. Of course, a truck is a very complicated thing in a complicated ecosystem. A simpler thing, like a shovel, might simply keep track of who has it and where it is. But, as we saw in The Self-Sovereign Internet of Things, there is value in having the thing itself keep track of its interactions, location, and status.

Selling the Truck

Selling the vehicle is more complicated than the previous scenarios. In 2012, we prototyped this scenario for Swift's Innotribe innovations group and presented it at Sibos. Heather Vescent of Purple Tornado created a video that visualizes how a sale of a motorcycle might happen in a heterarchical DIDComm environment2. You can see a screencast of the prototype in operation here. One important goal of the prototype was to support Doc Searls's vision of the Intention Economy. In what follows, I've left out some of the details of what we built. You can find the complete write-up in Buying a Motorcycle: A VRM Scenario using Personal Clouds.

Selling the Truck
Figure 4: Selling the Truck (click to enlarge)

In Figure 4, Alice is selling the truck to Doug. I'm ignoring how Alice and Doug got connected3 and am just focusing on the sale itself. To complete the transaction, Alice and Doug create a relationship. They both have relationships with their respective credit unions where Doug initiates and Alice confirms the transaction. At the same time, Alice has introduced the truck to Doug as the new owner.

Alice, Doug, and the truck are all connected to the DMV and use these relationships to transfer the title. Doug can use his agent to register the truck and get plates. Doug also has a relationship with his insurance company. He introduces the truck to the insurance company so it can serve as the service intermediary for the policy issuance.

Alice is no longer the owner, but the truck knows things about her that Doug shouldn't have access to and that she wants to maintain. We can create a digital twin of the truck that is no longer attached to the physical device, but has a copy of all the trip and maintenance information that Alice had co-created with the truck over the years she owned it. This digital twin has all the same functionality for accessing this data that the truck did. At the same time, Alice and Doug can negotiate what data also stays on the truck. Doug likely doesn't care about her trips and fuel purchases, but might want the maintenance data.

Implementation

A few notes on implementation:

  • The relationships posited in these use cases are all DIDComm-capable relationships.
  • The workflows in these scenarios use DIDComm messaging to communicate. I pointed out several places where the Introduction DIDComm protocol might be used. But other DIDComm protocols could be defined. For example, we could imagine workflow-specific messages for the scenario where Carol borrows the truck. The scenario where Doug buys the truck is rife with possibilities for protocols on DIDComm that would standardize many of the interactions. Standardizing these workflows through protocol (e.g., a common protocol for vehicle registration) reduces the effort for participants in the ecosystem; a sketch of such a message follows this list.
  • Some features, like attenuated permissions on a channel, are a mix of capabilities. DIDComm supports a Discovery protocol that allows Alice, say, to determine if Doug is open to engaging in a sale transaction. Other permissioning would be done by the agent outside the messaging system.
  • The agents I'm envisioning here are smart, rule-executing agents like those available in picos. Picos provide a powerful model for how a decentralized, heterarchical, interoperable internet of things can be built. Picos provide a DIDComm agent programming platform that is easily extensible. Picos live on an open-source pico engine that can run on anything that supports Node JS. They have been used to build and deploy several production systems, including the Fuse connected-car system discussed above.
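Here is the vehicle-registration sketch promised above: one message in a hypothetical DIDComm protocol between Doug's agent and the DMV. The type URI, DIDs, and fields are all invented for illustration; a real protocol would be defined along the lines Daniel Hardman's tutorial describes.

```typescript
// Invented for illustration: a request message in a hypothetical
// DIDComm vehicle-registration protocol between Doug's agent and the DMV.
const registrationRequest = {
  "@id": "f6d1a7c0-0000-4000-8000-000000000001", // unique message id
  "@type": "https://example.org/vehicle-registration/1.0/request", // assumed URI
  "~thread": { thid: "title-transfer-001" },     // ties it to the transfer
  vehicleDid: "did:peer:truck-placeholder",
  newOwnerDid: "did:peer:doug-placeholder",
  titleCredentialId: "cred-title-placeholder",   // proof the transfer happened
  requestedPlates: "standard",
};
```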

Conclusion

DIDComm-capable agents can be used to create a sophisticated relationship network that includes people, institutions, things and even soft artifacts like interaction logs. The relationships in that network are rich and varied—just like relationships in the real world. Things, whether they are capable of running their own agents or employ a soft agent as a digital twin, are much more useful when they exist persistently, control their own agent and digital wallet, and can act independently. Things now react and respond to messages from others in the relationship network as they autonomously follow their specific rules.

Everything I've discussed here and in the previous post is doable now. By removing the intermediating administrative systems that make up the CompuServe of Things and moving to a decentralized, peer-to-peer architecture, we can unlock the tremendous potential of the Self-Sovereign Internet of Things.


Notes

  1. Before Fuse, we'd built a more generalized IoT system based on a relationship network called SquareTag. SquareTag was a social product platform (using the vernacular of the day) that promised to help companies have a relationship with their customers through the product, rather than merely having information about them. My company, Kynetx, and others, including Drummond Reed, were working to introduce something we called "personal clouds" that were arrayed in a relationship network. We built this on an actor-like programming model called "picos". The pico engine and programming environment are still available and have been updated to provide DID-based relationships and support for DIDComm.
  2. In 2012, DIDComm didn't exist of course. We were envisioning something that Innotribe called the Digital Asset Grid (DAG) and speaking about "personal clouds" but the envisioned operation of the DAG was very much like what exists now in the DIDComm-enabled peer-to-peer network enabled by DIDs.
  3. In the intentcasting prototype, Alice and Doug would have found each other through a system that matches Alice's intent to buy with Doug's intent to sell. But a simpler scenario would have Alice tell the truck to list itself on Craig's List so Alice and Doug can meet up there.

Photo Credit: F-150 from ArtisticOperations (Pixabay License)


The Self-Sovereign Internet of Things

Coffee Beans

I've been contemplating a self-sovereign internet of things (SSIoT) for over a decade. An SSIoT is the only architecture which frees us from what I've called the CompuServe of Things. Unlike the CompuServe of Things, the SSIoT1 supports rich, peer-to-peer relationships between people, things, and their manufacturers.

In the CompuServe of Things, Alice's relationships with her things are intermediated by the company she bought them from as shown in Figure 1. Suppose, for example, she has a connected coffee grinder from Baratza.

Administrative relationships in today's IoT
Figure 1: Administrative relationships in today's CompuServe of Things (click to enlarge)

In this diagram, Alice uses Baratza's app on her mobile device to connect with Baratza's IoT cloud. She registers her coffee grinder, which only knows how to talk to Baratza's proprietary service API. Baratza intermediates all of Alice's interactions with her coffee grinder. If Baratza is offline, decides to stop supporting her grinder, goes out of business, or otherwise shuts down the service, Alice's coffee grinder becomes less useful and maybe stops working altogether.

In an SSIoT, on the other hand, Alice has direct relationships with her things. In Operationalizing Digital Relationships, I showed a diagram where Alice has relationships with people and organizations. But I left out things because I hadn't yet provided a foundational discussion of DIDComm-enabled digital relationships that's necessary to really understand how SSI can transform IoT. Figure 2 is largely the same as the diagram in the post on operationalizing digital relationships with just a few changes: I've removed the ledger and key event logs to keep it from being too cluttered and I've added a thing: a Baratza coffee grinder2.

Alice has a relationship with her coffee grinder
Figure 2: Alice has a relationship with her coffee grinder (click to enlarge)

In this diagram, the coffee grinder is a fully capable participant in Alice's relationship network. Alice has a DID-based relationship with the coffee grinder. She also has a relationship with the company who makes it, Baratza, as does the coffee grinder. Those last two are optional, but useful—and, importantly, fully under Alice's control.

DID Relationships for IoT

Let's focus on Alice, her coffee grinder, and Baratza to better understand the contrast between the CompuServe of Things and an SSIoT.

Alice's relationships with her coffee grinder and its manufacturer
Figure 3: Alice's relationships with her coffee grinder and its manufacturer (click to enlarge)

In Figure 3, rather than being intermediated by the coffee grinder's manufacturer, Alice has a direct, DID-based relationship with the coffee grinder. Both Alice and the coffee grinder have agents and wallets. Alice also has a DID-based relationship with Baratza, which runs an enterprise agent. Alice is now the intermediary, interacting with her coffee grinder and Baratza as she sees fit.

Figure 3 also shows a DID-based relationship between the coffee grinder and Baratza. In an administrative CompuServe of Things, we might be concerned with the privacy of Alice's data. But in a Self-Sovereign Internet of Things, Alice controls the policies on that relationship and thus what is shared. She might, for example, authorize the coffee grinder to share diagnostic information when she needs service. She could also issue a credential to Baratza to allow them to service the grinder remotely, then revoke it when they're done.

The following sections describe three of many possible use cases for the Self-Sovereign Internet of Things.

Updating Firmware

One of the problems with the CompuServe of Things is securely updating device firmware. There are many different ways to approach secure firmware updates in the CompuServe of Things—each manufacturer does it slightly differently. The SSIoT provides a standard way to know that a firmware update is from the manufacturer and not a hacker.

Figure 4: Updating the firmware in Alice's coffee grinder
Figure 4: Updating the firmware in Alice's coffee grinder (click to enlarge)

As shown in Figure 4, Baratza has written a public DID to the ledger. They can use that public DID to sign firmware updates. Baratza embedded their public DID in the coffee grinder when it was manufactured. The coffee grinder can resolve the DID to look up Baratza's current public key on the ledger and validate the signature. This ensures that the firmware package is from Baratza. And DIDs allow Baratza to rotate their keys as needed without invalidating the DIDs stored in the devices.
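A minimal sketch of the device-side check, assuming Baratza signs packages with an Ed25519 key and the grinder can resolve the embedded DID to that key. The resolver is stubbed out; only the signature check uses a real Node.js API.

```typescript
import { verify, KeyObject } from "node:crypto";

// Stub: a production device would resolve the DID Document on the ledger
// and extract the manufacturer's current verification key.
async function resolvePublicKey(did: string): Promise<KeyObject> {
  throw new Error(`DID resolution not wired up in this sketch (${did})`);
}

async function firmwareIsAuthentic(
  manufacturerDid: string, // embedded in the grinder at manufacture time
  firmware: Buffer,        // the downloaded update package
  signature: Buffer        // detached Ed25519 signature over the package
): Promise<boolean> {
  // Resolving the DID, rather than pinning a raw key, is what lets Baratza
  // rotate keys without invalidating the DIDs stored in deployed devices.
  const key = await resolvePublicKey(manufacturerDid);
  return verify(null, firmware, key, signature); // null algorithm = Ed25519
}
```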

Of course, we could also solve this problem with digital certificates. So, this is really just table stakes. The advantage of using SSIoT for secure firmware updates instead of digital certificates is that if Baratza is using it for other things (see below), they get this for free without also having to support the certificate code in their products or pay for certificates.

Proving Ownership

Alice can prove she owns a particular model of coffee grinder using a verifiable credential.

Alice uses a credential to prove she owns the coffee grinder
Figure 5: Alice uses a credential to prove she owns the coffee grinder (click to enlarge)

Figure 5 shows how this could work. The coffee grinder's agent is running the Introduction protocol and has introduced Alice to Baratza. This allows her to form a relationship with Baratza that is more trustworthy because it came on an introduction from something she trusts.

Furthermore, Alice has received a credential from her coffee grinder stating that she is the owner. This is kind of like imprinting. While it may not be secure enough for some use cases, for things like a coffee grinder, it's probably secure enough. Once Alice has this credential, she can use it to prove she's the owner. The most obvious place would be at Baratza itself to receive support, rewards, or other benefits. But other places might be interested in seeing it as well: "Prove you own a Baratza coffee grinder and get $1 off your bag of beans."
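As a sketch, the grinder-issued ownership credential might follow the general W3C Verifiable Credentials shape shown below. The type names, DIDs, and claim fields here are assumptions for illustration, not a published schema.

```typescript
// Sketch of the ownership credential in the general W3C VC shape.
// Types, DIDs, and claim names are illustrative placeholders.
const ownershipCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "OwnershipCredential"], // assumed type
  issuer: "did:peer:grinder-placeholder", // the thing itself is the issuer
  issuanceDate: "2020-12-21T00:00:00Z",
  credentialSubject: {
    id: "did:peer:alice-placeholder", // Alice, the owner
    owns: { make: "Baratza", model: "model-placeholder" },
  },
  // proof: { ... } — signature added by the grinder's agent when issuing
};
```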

Real Customer Service

We've all been in customer hell where we call a company, get put on hold, get asked a bunch of questions to validate who we are, have to recite serial numbers or model numbers to one agent, then another, and then lose the call and have to start all over again. Or we've been trapped in a seemingly endless IVR phone loop trying to even get to a human.

The DID-based relationship Alice has created with Baratza does away with that because DIDComm messaging creates a batphone-like experience wherein each participant knows they are communicating with the right party without the need for further authentication, reducing effort and increasing security. As a result, Alice has a trustworthy communication channel with Baratza that both parties can use to authenticate the other. Furthermore, as we saw in the last section, Alice can prove she's a bona fide customer.

But the ability of DIDComm messaging to support higher-level application protocols means that the experience can be much richer. Here's a simple example.

Figure 6: Alice uses a specialized wallet to manage the vendors of things she owns
Figure 6: Alice uses a specialized wallet to manage the vendors of things she owns (click to enlarge)

In Figure 6, Alice has two coffee grinders. Let's further assume that Alice has a specialized wallet to interact with her things. Doc Searls has suggested we call it a "briefer" because it's more capable than a wallet. Alice's briefer does all the things her credential wallet can do, but also has a user interface for managing all the things she owns and the relationships she has with them3. Students in my lab at BYU have been working on a prototype of such an interface we call Manifold using agent-enabled digital twins called "picos".

Having two things manufactured by Baratza presents a problem when Alice wants to contact them because now she is the intermediary between the thing and its vendor. But if we flip that and let the thing be the intermediary, the problem is easily resolved. Now when Alice wants to contact Baratza, she clicks one button in her briefer and lets her coffee grinder intermediate the transaction. The grinder can interject relevant information into the conversation so Alice doesn't have to. Doc does a great job of describing why the "thing as conduit" model is so powerful in Market intelligence that flows both ways.

You'll recall from DIDComm and the Self-Sovereign Internet that behind every wallet is one or more agents. Alice's briefer has an agent. And it has relationships with each of her things. Each of those has one or more agents. These agents are running an application protocol for vendor message routing. The protocol uses sub protocols that allow the grinder to act on Alice's behalf in customer support scenarios. You can imagine that CRM tools would be fitted out to understand these protocols as well.

There's at least one company working on this idea right now, HearRo. Vic Cooper, the CEO of HearRo recently told me:

Most communications happen in the context of a process. [Customers] have a vector that involves changing some state from A to B. "My thing is broken and I need it fixed." "I lost my thing and need to replace it." "I want a new thing and would like to pay for it but my card was declined." This is the story of the customer service call. To deliver the lowest effort interaction, we need to know this story. We need to know why they are calling. To add the story to our context we need to do two things: capture the intent and manage the state over time. SSI has one more super power that we can take advantage of to handle the why part of our interaction. We can use SSI to operationalize the relationships.

Operationalized relationships provide persistence and context. When we include the product itself in the conversation, we can build customer service applications that are low effort because the trustworthy connection can include not only the who, but also the what to provide a more complete story. We saw this in the example with two coffee grinders. Knowing automatically which grinder Alice needs service for is a simple bit of context, but one that reduces effort nonetheless.

Going further, the interaction itself can be a persistent object with its own identity and DID-based connections to the participants4. Now the customer and the company can bring tools to bear on the interaction. Others could be invited to join the interaction as necessary, and the interaction itself becomes a persistent nexus that evolves as the conversation does. I recently had a month-long customer service interaction involving a few dozen calls with Schwab (don't ask). Most of the effort for me and for them was reestablishing context over and over again. No CRM tool can provide that because it's entirely one-sided. Giving customers tools to operationalize customer relationships solves this problem.

A Self-Sovereign Internet of Things

The Sovrin Foundation has an IoT working group that recently released a whitepaper on Self-Sovereign Identity and IoT. In it you'll find a discussion of some problems with IoT and where SSI can help. The paper also has a section on the business value of SSI in IoT. The paper is primarily focused on how decentralized identifiers and verifiable credentials can support IoT. The last use case I offer above goes beyond those, primarily identity-centric, use cases by employing DIDComm messaging to ease the burden of getting support for a product.

In my next blog post, I'll extend that idea to discuss how SSI agents that understand DIDComm messages can support relationships and interactions not easily supported in the CompuServe of Things as well as play a bigger role in vendor relationship management. These and other scenarios can rescue us from an administrative, bureaucratic CompuServe of Things and create a generative IoT ecosystem that is truly internet-like.


Notes

  1. Some argue that since things can't be sovereign (see Appendix B of the Sovrin Glossary for a taxonomy of entities), they shouldn't be part of SSI. I take the selfish view that as a sovereign actor in the identity metasystem, I want my things to be part of that same ecosystem. Saying things are covered under the SSI umbrella doesn't imply they're sovereign, but merely says they are subject to the same overarching governance framework and use the same underlying protocols. In the next post on this subject, I'll make the case that even if things aren't sovereign, they should have an independent existence and identity from their owners and manufacturers.
  2. The choice of a coffee grinder is based simply on the fact that it was the example Doc Searls gave when we were having a discussion about this topic recently.
  3. This idea has been on my mind for a while. This post from 2013, Facebook for My Stuff, discusses it in the "social" vernacular of the day.
  4. The idea that a customer service interaction might itself be a participant in the SSIoT may cause some to shake their heads. But once we create a true internet of things, it's not just material, connected things that will be on it. The interaction object could have its own digital wallet, store credentials, and allow all participants to continue to interact with it over time, maintaining context, providing workflow, and serving as a record that everyone involved can access.

Photo Credit: Coffee Beans from JoseAlbaFotos (Pixabay license)


DIDComm and the Self-Sovereign Internet

Mesh

DID-based relationships are the foundation of self-sovereign identity (SSI). The exchange of DIDs to form a connection with another party gives both parties a relationship that is self-certifying and mutually authenticated. Further, the connection forms a secure messaging channel called DID Communication or DIDComm. DIDComm messaging is more important than most understand, providing a secure, interoperable, and flexible general messaging overlay for the entire internet.

Most people familiar with SSI equate DIDComm with verifiable credential exchange, but it's much more than that. Credential exchange is just one of an infinite variety of protocols that can ride on top of the general messaging protocol that DIDComm provides. Comparing DIDComm to the venerable TCP/IP protocol suite is not a stretch. Just as numerous application protocols ride on top of TCP/IP, so too can various application protocols take advantage of DIDComm's secure messaging overlay network. The result is more than a secure messaging overlay for the internet; it is the foundation for a self-sovereign internet with all that that implies.

DID Communications Protocol

DIDComm messages are exchanged between software agents that act on behalf of the people or organizations that control them. I often use the term "wallet" to denote both the wallet and agent, but in this post we should distinguish between them. Agents are rule-executing software systems that exchange DIDComm messages. Wallets store DIDs, credentials, personally identifying information, cryptographic keys, and much more. Agents use wallets for some of the things they do, but not all agents need a wallet to function.

For example, imagine Alice and Bob wish to play a game of TicTacToe using game software that employs DIDComm. Alice's agent and Bob's agent will exchange a series of messages. Alice and Bob may be using a game UI and be unaware of the details, but the agents are preparing plaintext JSON messages1 for each move using a TicTacToe protocol that describes the format and appropriateness of a given message based on the current state of the game.

Alice and Bob play TicTacToe over DIDComm messaging
Alice and Bob play TicTacToe over DIDComm messaging (click to enlarge)

When Alice places an X in a square in the game interface, her agent looks up Bob's DID Document. She received this when she and Bob exchanged DIDs and it's kept up to date by Bob whenever he rotates the keys underlying the DID2. Alice's agent gets two key pieces of information from the DID Document: the endpoint where messages can be sent to Bob and the public key Bob's agent is using for the Alice:Bob relationship.

Alice's agent uses Bob's public key to encrypt the JSON message to ensure only Bob's agent can read it and adds authentication using the private key Alice uses in the Alice:Bob relationship. Alice's agent arranges to deliver the message to Bob's agent through whatever means are necessary given his choice of endpoint. DIDComm messages are often routed through other agents under Alice and Bob's control.

Once Bob's agent receives the message, it authenticates that it came from Alice and decrypts it. For a game of TicTacToe, it would ensure the message complies with the TicTacToe protocol given the current state of play. If it complies, the agent would present Alice's move to Bob through the game UI and await his response so that the process could continue. But different protocols could behave differently. For example, not all protocols need to take turns like the TicTacToe protocol does.
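To ground the walkthrough, here is a sketch of what the plaintext move message might look like before Alice's agent encrypts it, loosely following the shape used in DIDComm protocol tutorials; the exact type URI and the packing helpers shown in comments are assumptions, not a normative format.

```typescript
// Illustrative plaintext DIDComm message for one TicTacToe move.
// The @type URI and field names are assumed; consult the actual
// TicTacToe protocol definition for the normative format.
const move = {
  "@id": "518be002-de8e-456e-b3d5-8fe472477a86", // unique message id
  "@type": "did:example:tictactoe/1.0/move",     // assumed protocol URI
  "~thread": { thid: "game-42" },                // identifies this game
  me: "X",         // the mark Alice is playing
  moves: ["X:B2"], // the move being made
};

// What Alice's agent does with it, sketched as hypothetical helpers:
//   const { endpoint, publicKey } = await resolveDidDocument(bobDid);
//   const packed = await authcrypt(move, publicKey, alicePrivateKey);
//   await deliver(endpoint, packed);
// Bob's agent reverses the process: authenticate, decrypt, then check
// the move against the protocol's rules for the current game state.
```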

DIDComm Properties

The DIDComm Protocol is designed to be

  1. Secure
  2. Private
  3. Interoperable
  4. Transport-agnostic
  5. Extensible

Secure and private follow from the protocol's support for heterarchical (peer-to-peer) connections and decentralized design along with its use of end-to-end encryption.

As an interoperable protocol, DIDComm is not dependent on a specific operating system, programming language, vendor, network, hardware platform, or ledger3. While DIDComm was originally developed within the Hyperledger Aries project, it aims to be the common language of any secure, private, self-sovereign interaction on, or off, the internet.

In addition to being interoperable, DIDComm should be able to make use of any transport mechanism including HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, and more.

DIDComm is an asynchronous, simplex messaging protocol that is designed for extensibility by allowing for protocols to be run on top of it. By using asynchronous, simplex messaging as the lowest common denominator, almost any other interaction pattern can be built on top of DIDComm. Application-layer protocols running on top of DIDComm allow extensibility in a way that also supports interoperability.
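As a small sketch of how a richer interaction pattern falls out of simplex messages, the snippet below recovers request/response by correlating replies to requests with a thread id, which is the standard DIDComm threading idea; the surrounding code is illustrative rather than any particular library's API.

```typescript
import { randomUUID } from "node:crypto";

// Sketch: request/response built from one-way (simplex) messages by
// correlating on a thread id, the standard DIDComm threading idea.
type Inbound = { "~thread"?: { thid: string }; body: unknown };
const pending = new Map<string, (body: unknown) => void>();

function request(send: (msg: object) => void, body: object): Promise<unknown> {
  const id = randomUUID();
  send({ "@id": id, body }); // fire-and-forget simplex send
  return new Promise((resolve) => pending.set(id, resolve));
}

// Invoked for every inbound message; a reply echoes the request's id
// in ~thread.thid, which pairs the two one-way flows into an exchange.
function onInbound(msg: Inbound): void {
  const thid = msg["~thread"]?.thid;
  if (!thid) return; // unthreaded message: not a reply
  const resolve = pending.get(thid);
  if (resolve) {
    pending.delete(thid);
    resolve(msg.body);
  }
}
```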

DIDComm Protocols

Protocols describe the rules for a set of interactions, specifying the kinds of interactions that can happen without being overly prescriptive about their nature or content. Protocols formalize workflows for specific interactions like ordering food at a restaurant, playing a game, or applying for college. DIDComm and its application protocols are one of the cornerstones of the SSI metasystem, giving rise to a protocological culture within the metasystem that is open, agentic, inclusive, flexible, modular, and universal.

While we have come to think of SSI agents as being strictly about exchanging peer DIDs to create a connection, requesting and issuing a credential, or proving things using credentials, these are merely specific protocols defined to run over the DIDComm messaging protocol. Many others are possible. Specifications describe the protocols for these three core applications of DIDComm: connecting, credential exchange, and proof presentation.

There's a protocol for agents to discover the protocols that another agent supports. And another for one agent to make an introduction4 of one agent to another. The TicTacToe game Alice and Bob played above is enabled by a protocol for TicTacToe. Bruce Conrad, who works on picos with me, implemented the TicTacToe protocol for picos, which are DIDComm-enabled agents.

Daniel Hardman has provided a comprehensive tutorial on defining protocols on DIDComm. We can imagine a host of DIDComm protocols for all kinds of specialized interactions that people might want to undertake online including the following:

  • Delegating
  • Commenting
  • Notifying
  • Buying and selling
  • Negotiating
  • Enacting and enforcing contracts
  • Putting things in escrow (and taking them out again)
  • Transferring ownership
  • Scheduling
  • Auditing
  • Reporting errors

As you can see from this partial list, DIDComm is not just a secure, private way to connect and exchange credentials. Rather DIDComm is a foundation protocol that provides a secure and private overlay to the internet for carrying out almost any online workflow. Consequently, agents are more than the name "wallet" would imply, although that's a convenient shorthand for the common uses of DIDComm today.

A Self-Sovereign Internet

Because of the self-sovereign nature of agents and the flexibility and interoperability they gain from DIDComm, they form the basis for a new, more empowering internet. While self-sovereign identity is the current focus of DIDComm, its capabilities exceed what many think of as "identity." When you combine the vast landscape of potential verifiable credentials with DIDComm's ability to create custom message-based workflows to support very specific interactions, it's easy to imagine that the DIDComm protocol and the heterarchical network of agents it enables will have an impact as large as the web, perhaps the internet itself.


Notes

  1. DIDComm messages do not strictly have to be formatted as JSON.
  2. Alice's agent can verify that it has the right DID Document and the most recent key by requesting a copy of Bob's key event log (called deltas for peer DIDs) and validating it. This is the basis for saying peer DIDs are self-certifying.
  3. I'm using "ledger" as a generic term for any algorithmically controlled distributed consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others.
  4. Fuse with Two Owners shows an introduction protocol for picos I used in building Fuse in 2014 before picos used DIDComm. I'd like to revisit this as a way of building introductions into picos using a DIDComm protocol.

Photo Credit: Mesh from Adam R (Pixabay)


Operationalizing Digital Relationships

Using a wallet

Recently, I've been making the case for self-sovereign identity and why it is the correct architecture for online identity systems.

  • In Relationships and Identity, I discuss why identity systems are really built to manage relationships rather than identities. I also show how the architecture of the identity system affects the three important properties relationships must have to function effectively: integrity, lifespan, and utility.
  • In The Architecture of Identity Systems I introduced a classification scheme that broadly categorized identity systems into one of three broad architectures: administrative, algorithmic, or autonomic.
  • In Authentic Digital Relationships, I discuss how the architecture of an identity system affects the authenticity of the relationships it manages.

This post focuses on how people can operationalize the relationships they are party to and become full-fledged participants online. Last week, Doc Searls posted What SSI Needs where he recalls how the graphical web browser was the catalyst for making the web real. Free (in both the beer and freedom senses) and substitutable, browsers provided the spark that gave rise to the generative qualities that ignited an explosion of innovation. Doc goes on to posit that the SSI equivalent of the web browser is what we have called a "wallet" since it holds credentials, analogously to your physical wallet.

The SSI wallet Doc is discussing is the tool people use to operationalize their digital relationships. I created the following picture to help illustrate how the wallet fulfills that purpose.

Relationships and Interactions in Sovrin Network
Relationships and Interactions in Sovrin Network (click to enlarge)

This figure shows the relationships and interactions in SSI networks enabled by the Hyperledger Indy and Aries protocols. The most complete example of these protocols in production is the Sovrin Network.

In the figure, Alice has an SSI wallet1. She uses the wallet to manage her relationships with Bob and Carol as well as a host of organizations. Bob and Carol also have wallets. They have a relationship with each other and Carol has a relationship with Bravo Corp, just as Alice does2. These relationships are enabled by autonomic identifiers in the form of peer DIDs (blue arrows). The SSI wallet each participant uses provides a consistent user experience, like the browser did for the Web. People using wallets don't see the DIDs (identifiers) but rather the connections they have to other people, organizations, and things.

These autonomic relationships are self-certifying meaning they don't rely on any third party for their trust basis. They are also mutually authenticating: each of the parties in the relationship can authenticate the other. Further, these relationships create a secure communications channel using the DIDComm protocol. Because of the built-in mutual authentication, DIDComm messaging creates a batphone-like experience wherein each participant knows they are communicating with the right party without the need for further authentication. As a result, Alice has trustworthy communication channels with everyone with whom she has a peer DID relationship.

Alice, as mentioned, also has a relationship with various organizations. One of them, Attester Org, has issued a verifiable credential to Alice. They issued the credential (green arrows) using the Aries credential exchange protocol that runs on top of the DIDComm-based communication channel enabled by the peer DID relationship Alice has with Attester. The credential they issue is founded on the credential definition and public DID (an algorithmic identifier) that Attester Org wrote to the ledger.

When Alice later needs to prove something (e.g. her address) to Certiphi Corp, she presents the proof over the DIDComm protocol, again enabled by the peer DID relationship she has with Certiphi Corp. Certiphi is able to validate the fidelity of the credential by reading the credential definition from the ledger, retrieving Attester Org's public DID from the credential definition, and resolving it to get Attester Org's public key to check the credential's signature. At the same time, Certiphi can use cryptography to know that the credential is being presented by the person it was issued to and that it hasn't been revoked.
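Summarized as a pipeline, the verification Certiphi performs looks roughly like the sketch below. The function names stand in for whatever an Aries-style verifier library actually provides; they are stubs here, not a real API.

```typescript
// Sketch of Certiphi's verification steps; every helper is a stub for
// capabilities a real Aries-style verifier library would provide.
type Presentation = { credDefId: string; signature: string; revoked?: boolean };

async function readCredDef(credDefId: string): Promise<{ issuerDid: string }> {
  throw new Error("ledger lookup not wired up in this sketch");
}
async function resolveIssuerKey(did: string): Promise<string> {
  throw new Error("DID resolution not wired up in this sketch");
}
async function signatureValid(p: Presentation, key: string): Promise<boolean> {
  throw new Error("cryptography not wired up in this sketch");
}

async function verifyPresentation(p: Presentation): Promise<boolean> {
  const { issuerDid } = await readCredDef(p.credDefId); // 1. cred def from ledger
  const key = await resolveIssuerKey(issuerDid);        // 2. issuer's public key
  const sigOk = await signatureValid(p, key);           // 3. check the signature
  // A real verifier also checks that the presentation is bound to the
  // holder the credential was issued to.
  return sigOk && !p.revoked;                           // 4. revocation status
}
```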

This diagram has elements of each architectural style described in The Architecture of Identity Systems.

  • Alice has relationships with five different entities: her friends Bob and Carol as well as three different organizations. These relationships are based on autonomic identifiers in the form of peer DIDs. All of the organizations use enterprise wallets to manage autonomic relationships4.
  • As a credential issuer, Attester Org has an algorithmic identifier in the form of a public DID that has been recorded on the ledger. The use of algorithmic identifiers on a ledger3 allows public discovery of the credential definition and the public DID by Certiphi Corp when it validates the credential. The use of a ledger for this purpose is not optional unless we give up the loose coupling it provides. Loose coupling provides scalability, flexibility, and isolation. Isolation is critical to the privacy protections that verifiable credential exchange via Aries promises.
  • Each company will keep track of attributes and other properties they need for the relationship to provide the needed utility. These are administrative systems since they are administered by the organization for its own purposes and their root of trust is a database managed by the organization. The difference between these administrative systems and those common in online identity today is that only the organization is depending on them. People have their own autonomic root of trust in the key event log that supports their wallet.

Alice's SSI wallet allows her to create, manage, and utilize secure, trustworthy communication channels with anyone online without reliance on any third party. Alice's wallet is also the place where specific, protocol-enabled interactions like credential exchange happen. The wallet is a flexible tool that Alice uses to manage her digital life.

We have plenty of online relationships today, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. The SSI wallet is the platform upon which people can stand and become digitally embodied to operationalize their digital life as full-fledged participants in the digital realm.


Notes

  1. Alice's SSI wallet is like other wallets she has on her phone with several important differences. First, it is enabled by open protocols and second, it is entirely under her control. I'm using the term "wallet" fairly loosely here to denote not only the wallet but also the agent necessary for the interactions in an SSI ecosystem. For purposes of this post, delineating them isn't important. In particular, Alice may not be aware of the agent, but she will know about her wallet and see it as the tool she uses.
  2. Note that Bob doesn't have a relationship with any of the organizations shown. Each participant has the set of relationships they choose to have to meet their differing circumstances and needs.
  3. I'm using "ledger" as a generic term for any algorithmically controlled distributed consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others.
  4. Enterprise wallets speak the same protocols as the wallets people use, but are adapted to the increased scale an enterprise would likely need and are designed to be integrated with the enterprise's other administrative systems.

Photo Credit: Red leather wallet on white paper from Pikrepo (CC0)