Evaluating KRL Declarations

I can't tell you how much I rely on browser consoles to test and debug JavaScript. Programming KRL has been made more difficult by the fact that there's no command line; KRL is evaluated online in the context of a pico. You couldn't just run a few lines to test a complex set of operations or figure out a dumb syntax error.

I just built a simple KRL declaration evaluator that you can use to run KRL declarations and see the results. Here's a screenshot:

KRL Declaration Evaluator

The sad part is this only took about three hours because all the pieces were just lying around in the code that makes picos work. I just had to hook them up. This is going to save me a lot of time. And I'm sure my students will enjoy it. Much nicer than adding logging statements, uploading the code to the pico, evaluating, and then looking at the logs. Why didn't I do this years ago?

I'm sure there are weird expression results I might not have accounted for. And the evaluator can't help with expressions that contain references to persistent variables, event attributes, and other expressions that only make sense in the context of a pico.

Right now, the editor is using a JavaScript syntax highlighter, which is close, but not perfect. For example, the image shows some errors (red x's in the left margin) that are not really KRL errors; that code is fine KRL. If you're interested in writing a KRL syntax highlighter for Ace, I'd be much obliged. For now, I've left syntax highlighting on, but turned syntax checking off. No worries, the results will show you syntax errors if you submit something with bad syntax.

KRL Declaration evaluator with error

I made this work with the help of the Ace Editor, Bootstrap, and jQuery.

Decentralization Is Hard, Maybe Too Hard


In a Linux Journal piece entitled Giving Silos Their Due, Doc Searls laments that decentralized services, with a few notable exceptions, haven't become the preferred way of engineering new technologies. He says:

In those days, many of us had full confidence that Jabber/XMPP would do for instant messaging (aka chat) what SMTP/POP3/IMAP did for e-mail and HTTP/HTML and its successors did for publishing and all the other things one can do on the World Wide Web. We would have a nice flat, distributed and universal standard that people could employ any way they wanted, including on their own personal hardware and software, with countless interoperable systems and no natural barriers to moving data easily from any one system to any other.

Didn't happen.

And Jabber/XMPP isn't the only place this didn't happen, as Doc goes on to point out. In fact, after Web 1.0, decentralized protocol-based systems have never become the preferred way to do something significant on the Internet.

I remember telling Doc a while back that I'm often afraid that the Internet is an aberration. That is, a gigantic accident brought on by special circumstances. That accident showed us that large-scale, decentralized systems can be built, but those circumstances are not normal.

Jon Udell was visiting this week and we spoke in similar tones about blogging and how it turned from a vibrant, two-way conversation to a place for electronic magazines and think-pieces. Early blogging felt like a community. I met many of the people I now consider good friends through blogging. Now, blogging is just a way to market, even if all you're marketing is ideas. People understand posting on Facebook or Medium because it's simple, fast, and gets immediate attention.

Similarly, Jon's elmcity project, meant to demonstrate the network effects that emerge in physical communities from a decentralized system for calendar events, couldn't get traction because, as I understand it, people understand a Facebook Event page or sending a tweet more easily than they do publishing their calendar. Even when people got calendars, it was hard to get them to understand why putting a PDF document online isn't good enough.

I get that decentralized thinking is hard. Even harder is getting a decentralized ecosystem off the ground. The Internet was a fun little playground before 1994. Of course, the first nodes on the Internet were put in place in 1969. When I was in grad school in the '80s, there were so few public nodes on the Internet that we could FTP the entire list from Berkeley anytime we set up a new machine. That's a long incubation period for something we now consider critical infrastructure of the modern world. Man-made, decentralized things are difficult to pull off.

So, yeah, I'm a dinosaur. Like Doc, I'll "never believe silos are the best way to make the world work in the long run. And I'll always believe that the flat distributed world built on free and open stuff is the most supportive and fertile base on which to build the best and broadest range of goods and services." I'm pretty confident that the return of Online Service 2.0, what I call The CompuServe of Things, will ultimately leave people flat. I hope that the IoT ultimately creates the right circumstances for a resurgence of decentralized thinking.

Until then, I'm working on decentralized systems to promote learning, a place where I can, at least for a small part of the Internet, affect both sides of the interaction.

February 2016 CTO Breakfast

Utah CTO Breakfast at Tower Deli, Thanksgiving Point

The Utah CTO Breakfast is held monthly. The breakfast is an informal, no-host event. Come prepared to discuss your favorite technologies, development tools and practices, or whatever.

Categories: #ctob, #uttech

The Cloud Is Not the Internet

At some point in the distant past, a professor in a networking class at some unknown university drew a diagram that looked like this to represent two machines talking to each other over the Internet:

cloud diagrams Internet

The cloud in the diagram indicated that the Internet was a shapeless, almost immaterial transport medium. In the words of Craig Burton, the Internet was a Giant Zero, a "hollow sphere: a giant three-dimensional zero." Whether it's a cloud or a giant hollow sphere (harder to draw), the Internet is a place where every node is functionally the same distance away from every other node. On the Internet's hollow sphere, we all live on the surface, pulled by its unique gravity to the center.

Sometime in the last decade, someone else drew this picture of a cloud:

cloud diagrams service provider

The cloud in this diagram may look the same, but it's quite different from the one in the previous diagram. The cloud in the second picture isn't a hollow sphere, a giant zero. Rather, it's a destination. A place where servers live. It's immaterial in the sense that you can't touch the servers delivering you service. But it's a place, not a path.

The distinction is important. In the first diagram, the computers are peers and the cloud connects them. In the second, the computer is a client to some invisible server hiding in the Cloud.

This isn't to say the Cloud is bad. Lots of goodness there that I use every day. But we shouldn't let our love affair with the Cloud make us lose sight of what makes the Internet special.

The Internet is open. By and large, the Cloud is not. Openness means that the Internet is interoperable, based on standards, and governed by agreements and processes that are more transparent than those of a private company.

Anyone can play on the Internet. In the Cloud you are in someone else's domain and have agreed to their obfuscated terms and conditions. While you have to pay for transport and an address on the Internet, those are commodities that are available from a wide variety of providers.

On the Internet, you are a peer. In the Cloud, you're a client. And on the Internet, you're zero distance away from every other node.

The Internet is a vast, vibrant, diverse ecosystem. The Cloud is one, relatively tiny part of the Internet, a monoculture of protocols and business models.

The Internet is possibly the most amazing feat of decentralized infrastructure humankind has ever accomplished. The Cloud is a nice model for building useful businesses.

So, next time you see a cloud diagram, remember that you can draw it in two ways, and those two models have very different properties.

Sovereign-Source Identity, Autonomy, and Learning


One of the stated aims of a BYU education is lifelong learning.

... a BYU diploma is a beginning, not an end, pointing the way to a habit of constant learning. In an era of rapid changes in technology and information, the knowledge and skills learned this year may require renewal the next. Therefore, a BYU degree should educate students in how to learn, teach them that there is much still to learn, and implant in them a love of learning "by study and also by faith" (D&C 88:118).

I don't think BYU is unique in this desire. A university education is designed to imbue graduates with expertise in a specific area of study. But that's never meant that they know all they'll ever need to know. A university education is more than just putting students through a few courses and then testing that they've absorbed the right facts. Students should become members of the intellectual discipline they study, able to carry on learning long after they leave the university.

Helping students become learners is harder than just teaching them. Active learners are responsible for their education. They are autonomous agents who not only participate in learning activities, but select things to learn, track their progress, and regulate and prioritize activities.

Sovereign-Source Identity

One of the cornerstones of BYU's plan to help students learn to learn is making them responsible for vital components of their online identity--a concept called sovereign-source identity.

Most students come to the university with multiple digital identities including email and social media. The University adds to this by giving them another (or three). These identities put the student in the administrative domains of the issuing party and are given for the issuing parties' purposes and on their terms.

A sovereign-source identity, in contrast, is one created and maintained by a person for their own purposes. On today's Internet, these usually take the form of a domain name1. By registering and using a domain name, a person can create an online identity that they control regardless of the administrative whims and business plans of someone else.

In support of this, we're using Domain of One's Own (DoOO) to offer a domain name and associated hosting to every BYU student along with faculty and staff. We have about 1500 people signed up to date, and it continues to grow.

Data Ownership and Personal APIs

Personal autonomy depends on a person's control of their data and applications. To see why, consider the following picture:

traditional LMS

Traditionally we think of a student completing learning activities with an LMS. All the data about the student's activities gets stored in the LMS. And student data remains with the institution, if it's kept at all, when the student leaves.

Now consider this picture with all the same components, but rearranged:

personal API and the LMS

In this model, the student completes learning activities and stores the results in a data store they control. The student submits them to the LMS via a personal API. The LMS pushes data like grades, comments, due dates, and so on back via the student's personal API.
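To make the exchange concrete, here is a minimal, in-memory sketch of the flow just described. All of the names and record shapes are hypothetical (this is not BYU's actual personal API), and the grade value is invented for illustration.

```python
class PersonalDataStore:
    """A student-controlled data store behind a personal API (hypothetical)."""

    def __init__(self, owner):
        self.owner = owner
        self.records = []  # submissions, grades, comments, due dates, etc.

    def add_record(self, kind, payload):
        record = {"kind": kind, "payload": payload}
        self.records.append(record)
        return record


class LMS:
    """An LMS that reads from, and writes back to, the student's personal API."""

    def receive_submission(self, store, assignment, work):
        # The student submits work to the LMS via their personal API...
        store.add_record("submission", {"assignment": assignment, "work": work})
        # ...and the LMS pushes the grade back through the same API, so the
        # student keeps a copy of the data about their learning.
        grade = {"assignment": assignment, "score": 93}  # illustrative value
        store.add_record("grade", grade)
        return grade


store = PersonalDataStore("alice")
lms = LMS()
lms.receive_submission(store, "essay-1", "Draft text...")
kinds = [r["kind"] for r in store.records]
print(kinds)  # ['submission', 'grade']
```

The point of the sketch is the direction of the arrows: both the submission and the grade end up in a store the student controls, so nothing is lost when the student moves to a different LMS.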

Not shown in this picture is that this model supports the student using multiple LMS's owned and managed by different organizations both during and beyond their matriculation. The student could use an LMS at school, one at work, switch to a different LMS when they transfer schools, and so on. The student is the center of their learning. The student is in control.

Hosting is an important component of the DoOO program—not just because a domain is more interesting when it points to something—but because it provides a place to host personal data and an accompanying API. Because it's hosted, it's not inside some company's administrative domain and subject to their terms and conditions. Instead, it's in a space that is under control of the domain owner. Hosting implies that the data and associated services can be moved independent of the underlying hosting provider.

Limits to Control

Recently, Marguerite McNeal wrote BYU’s Bold Plan to Give Students Control of Their Data in EdSurge. Judging from the comments on an article Jim Groom wrote that references Marguerite's piece, Domains as Ground Zero for the Struggle over Agency, there's a lot of misunderstanding about what it means to give students control of their identity and personal data. Here are a few:

...you never make a compelling case for giving students control over their data. I don’t even know what that means. Does that mean they can change their grades? I sure as hell hope not! Does that mean they have to grant each instructor permission to add a grade to their record? That would be annoying for the student and the instructor.
To assume (in this case) students have sovereignty over claims regarding their educational experience is ill-founded. Students aren’t the “supreme” owner of their grades. Universities are trusted to determine how grades, GPA’s, semesters, credit hours etc. are calculated.
I would like to echo the two previous comments and add that this is bordering on ridiculous. We do not and should not own every piece of information about ourselves.

One of the mistakes people make when discussing control of personal data is to assume that any data about someone should automatically be included in the data they control. We might as well believe that a person should control every photograph in which they appear. That is ridiculous. Thankfully, no one is suggesting that.

Grades, for example, might be about a student, but they're not the student's data. Consequently, the definitive copy would always be under the control of the institution that awarded them. Lots of institutions will have data about people. People will continue to be part of the administrative regimes of those institutions.

While the grade and the transcript belong to the institution, the student might want a copy in their personal data store. Even a certified copy. This isn't hard to do technically and could be accomplished in several ways. The harder part of this is establishing the standard so that an employer, for example, could validate a credential at multiple schools without integrating with each of them.


Helping students create an online identity, independent of the various administrative spheres to which they belong, and giving them control of their data in meaningful ways leads to students being responsible for their learning and lets them act autonomously.

The Domain of One's Own program aims to improve student literacy about what it means to be a sovereign, online citizen, independent of the administrative regimes with which she interacts. Personal APIs build on that by giving students the means to control their data and act on that independence. We believe that these are foundational to enabling life-long learning.

  1. There are problems with a domain name as the basis for an online identity since they are rented rather than owned. Projects like WebDHT and XDI are aimed at creating single-use identifiers people can claim forever.

Aspen Grove Winter Workshop

La Vattay

The Aspen Grove Winter Workshop (AGWW) will be held February 17 and 18, 2016 at BYU's Aspen Grove Conference Center. The workshop is hosted by BYU's Office of the CIO. You can get tickets on Eventbrite. The workshop is open to anyone.

AGWW is an unconference that is focused on University APIs, Domain of One's Own, Personal Learning Systems, Learning Management Systems, Student Information Systems, and other strategic uses of IT in the university environment. This isn't just a technical conference. There will be plenty of discussions about the impact of these technologies on learning and the modern university.

We held the first edition of this workshop last June. Jim Groom did a good job summarizing the extraordinary two days. He says:

The conference was relaxed, intimate, and intense all at once—is that possible? There were only 40 people, but we had 33 sessions over two days. We got to know each other fairly well, and spent a lot of time thinking and talking about APIs, but in a low-key environment. Major props to Heidi Nobantu Saul for framing and facilitating the unconference approach beautifully. It reminded me of the 2007 Northern Voice Moose Camp, just with fewer people. A lot of serendipitous discussions, in-betweeness, and conversational sessions.

There was a lot of energy and many, many great sessions. We loved how the workshop turned out last June, so we decided to do it again.

As an unconference, AGWW has no assigned speakers or panels, so it's about getting stuff done. If you come and there aren't any sessions that interest you, that's your own fault, since you can pretty much call any session you like. We will have a trained open space facilitator at the workshop to run the show and make sure everything goes smoothly.

We'll be up in the mountains above Sundance Ski Resort, so come for the workshop and stay for the skiing! We've reserved a block of rooms at Sundance and will be providing a shuttle between Sundance and Aspen Grove. There's also plenty of hotel space down in the valley.

The workshop should be a lot of fun. I hope you can make it! Sign up now!

Promises and Communities of Things


The current Internet of Things (IoT) is a shadow of what it could be. I've named it The CompuServe of Things. As an antidote to the anemic IoT model we use now, I've proposed social things cooperating with each other in trustworthy spaces I call "communities of things." I've written about the importance of culture in creating these communities.

The things in these communities are more properly thought of as spimes than as networked devices. They might be networked devices, but they needn't be. If you've followed my work, you'll recognize many concepts from SquareTag and Fuse at work here. I've got a group of students building out spimes and communities on top of our pico platform.


One of the questions I've been wrestling with is how spimes work out agreements within a community. Serendipitously, I ran across Mark Burgess's work on promise theory. Mark has laid out some very important principles for how we can design systems that cooperate. And cooperation is the foundation of an IoT that isn't anemic.

I can't hope to describe promise theory in a single blog post, but I do want to introduce some of the ideas. In promise theory, a promise is a declaration of intent that recipients can use to understand better the promiser's behavior. A promise is different from an imposition, a restriction placed on one actor by another. Promises recognize that actors are autonomous and cannot be coerced or forced into a certain behavior. Promise theory is a model of voluntary cooperation. That's a pretty good description of what we want from communities of things.

An imposition attempts to force the recipient to behave in a specific way. The system imposing obligations must understand the capabilities of the recipient. Obligations separate intent from implementation, creating uncertainty. An actor receiving obligations from multiple sources has no way to resolve conflicts.

In contrast, actors that make promises are declaring their intent with respect to some behavior. The actor knows its own context and uses it to make promises. Promises can come into conflict, but the actor who issued them has the context to resolve them1.

Promises require that the promisee evaluate whether the promiser will keep the promise or not. This puts the responsibility to plan for failure on the promisee. Some promises are more likely to be kept than others. Promises bring uncertainty to the foreground. This idea fits nicely with our intention to use reputation in creating trustworthy spaces.
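A toy model can make this vocabulary concrete: a promiser declares intent, and the promisee independently assesses how likely the promise is to be kept based on past behavior. The class names and the scoring rule are illustrative only, a crude stand-in for the reputation systems mentioned above.

```python
class Promise:
    def __init__(self, promiser, body):
        self.promiser = promiser  # the autonomous actor declaring intent
        self.body = body          # the promised behavior


class Actor:
    def __init__(self, name):
        self.name = name
        self.history = []  # (promise body, kept?) observations

    def promise(self, body):
        return Promise(self, body)

    def assess(self, promise):
        """The promisee's job: estimate the chance the promise is kept,
        based on the promiser's track record."""
        outcomes = [kept for body, kept in promise.promiser.history]
        if not outcomes:
            return 0.5  # no history: maximum uncertainty
        return sum(outcomes) / len(outcomes)


alice, bob = Actor("alice"), Actor("bob")
p = bob.promise("report resource usage accurately")
bob.history += [("report resource usage accurately", True),
                ("report resource usage accurately", True),
                ("notify members", False)]
print(alice.assess(p))  # 2/3, based on bob's track record
```

Note that assessment belongs to the promisee, not the promiser; that is exactly the "plan for failure" responsibility the paragraph above describes.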

Promises and Picos

Promises are a great complement to our pico-based model of programming. We can think of each pico as a collection of promises in the form of the rules that are run in response to events. "When I see event A, I promise to do X, Y, and Z."

We can visualize pico A having a channel to pico B as a promise by pico B to accept events from pico A. We can restrict channels so that the promise is more specific. Pico B promises to accept events of types X and Y from pico A. Note that the point-to-point nature of event channels in pico-to-pico communication (rather than using a global event bus) helps preserve pico autonomy by letting picos determine what events they'll accept from which picos.
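The two kinds of promises just described can be sketched in a few lines: each rule is a promise to act when a matching event arrives, and a channel is a promise to accept only certain event types. The class and method names below are illustrative, not KRE's actual API.

```python
class Pico:
    def __init__(self, name):
        self.name = name
        self.rules = {}     # event type -> actions (rule promises)
        self.channels = {}  # channel id -> accepted event types (channel promises)
        self.log = []

    def add_rule(self, event_type, action):
        # "When I see event A, I promise to do X."
        self.rules.setdefault(event_type, []).append(action)

    def open_channel(self, channel_id, accepted_types):
        # A restricted channel: a promise to accept only these event types.
        self.channels[channel_id] = set(accepted_types)

    def receive(self, channel_id, event_type):
        if event_type not in self.channels.get(channel_id, set()):
            return False  # event type not promised on this channel
        for action in self.rules.get(event_type, []):
            self.log.append(action(self))
        return True


b = Pico("B")
b.add_rule("fuel:level_low", lambda pico: f"{pico.name} notifies owner")
b.open_channel("chan-A", {"fuel:level_low"})  # B's promise to A
print(b.receive("chan-A", "fuel:level_low"))  # True: promise kept
print(b.receive("chan-A", "trip:ended"))      # False: not promised
```

Because pico B decides which event types each channel accepts, B stays autonomous: A can raise whatever events it likes, but only the promised ones have any effect.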

Promises and Communities

One of my examples of a community is an electric car negotiating charging times with other appliances. But that gets pretty complicated, so let's start with something simpler. Fuse has the notion of a fleet, which is essentially a community pico for the vehicles (spimes) in the fleet. The fleet community pico makes the following general promises:

  • I promise to evaluate requests to join the community
  • I promise to identify members of the community to each other
  • I promise to aggregate member information for other interested parties
  • I promise to only release member and aggregate information to certain parties
  • I promise to route certain events to the owners' picos

These seem like a good set of rules for any community. Specific types of communities might have additional promises implemented as rules. For example, I could imagine communities that jointly share a resource. These promises might include:

  • I promise to maintain authorization credentials for an API all the members use
  • I promise to accurately report resource usage to all members of the community and the owner
  • I promise to notify members of resource availability status
  • I promise to allocate resources fairly (or in accordance with some other scheme)
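The "allocate resources fairly" promise above is the easiest one to make concrete. Here is a sketch using an even split with the remainder going to the earliest members; this policy is just one of the "other schemes" the promise allows.

```python
def allocate(total, members):
    """Divide `total` units of a shared resource evenly among members;
    any remainder goes to the earliest members in the list."""
    share, extra = divmod(total, len(members))
    return {m: share + (1 if i < extra else 0)
            for i, m in enumerate(members)}


print(allocate(10, ["car", "dryer", "water-heater"]))
# {'car': 4, 'dryer': 3, 'water-heater': 3}
```

A community pico keeping this promise would run something like `allocate` in a rule whenever membership or availability changes, then keep its "notify members of resource availability status" promise by raising events with the new allocations.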

For their part, members of a shared-resource community make certain promises as well:

  • I promise to accurately identify my needs to the community
  • I promise to use resources in accordance with allocation

Being able to identify the promises every community makes, and the promises that specialized types of communities make, is an important step toward writing the code that makes a community function and toward structuring that code as functionality that can be layered.

One of the challenges of building promise-based systems is that behavior emerges from the promises and the interactions between actors rather than being read linearly, as in an imperative program. Consequently, developers of pico-based systems need good monitoring to provide them with a "god's-eye view" of the interactions between picos and the overall state of the system. Otherwise, such systems can be difficult to debug.


I was excited when I ran across Mark's ideas on promises because they seemed to mesh nicely with what we're trying to do at Pico Labs. They have provided a nice mental model for thinking about the interactions of autonomous agents. For example, when thinking through the promises that a shared-resource community might make, it's tempting to think of a promise like:

  • I promise to not let community members go over their allocation

But this either implies that the community pico controls the resource and can thus cut a member off, or it is an impossible promise, since the community pico can't control the members. You have to think carefully through a pico's capabilities to understand the promises it can make.

I'm looking forward to delving deeper into this idea as we start implementing the code for community picos.

  1. You might recognize from this description a parallel between promises and event-based systems in that they both favor semantic encapsulation of actor capabilities.

Reactive Programming with Picos

Updated to include discussions about identity and KRL.

pico labs stacked logo

The Reactive Manifesto describes a type of system architecture that has four characteristics (quoting from the manifesto):

  • Responsive: The system responds promptly if at all possible.
  • Resilient: The system stays responsive in the face of failure.
  • Elastic: The system stays responsive under varying workload.
  • Message Driven: Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation, location transparency, and provides the means to delegate errors as messages.

reactive system property stack

These are often represented as a stack since the only explicit architectural choice is to be message driven. The others are emergent properties of that and other architectural choices.

The goal of the reactive manifesto is to promote architectures and systems that are more flexible, loosely-coupled, and scalable while making them amenable to change. The manifesto doesn't specify a development methodology so reactive systems can be built using a wide variety of systems, frameworks, and languages.

Persistent Compute Objects

Persistent compute objects (picos), are a good choice for building reactive systems—especially in the Internet of Things.

Picos implement the actor model of distributed computation. Actors extend message-driven programming with additional required properties. In response to a received message,

  1. actors send messages to other actors
  2. actors create other actors
  3. actors implement a state machine that can affect the behavior of the next message received

This is also a good, high-level description of the properties that picos have. Picos respond to events and queries by running rules. Depending on the rules installed, a pico may raise events for itself or other picos. Picos can create and delete other picos. Each pico has a persistent data store that can only be affected by rules that run in response to events. I describe picos and their API and programming model in more detail elsewhere. Event-driven systems, like those built from picos, are the basis for systems that meet the reactive manifesto.
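The three actor behaviors in the list above can be sketched in a few lines: in response to a message, an actor can send messages, create actors, and change the state that governs how it handles the next message. This is a toy runtime for illustration, not KRE.

```python
class Actor:
    registry = {}  # a stand-in for message routing between actors

    def __init__(self, name, handler, state=None):
        self.name, self.handler, self.state = name, handler, state
        Actor.registry[name] = self

    def send(self, target, message):
        Actor.registry[target].receive(message)  # 1. actors send messages

    def create(self, name, handler, state=None):
        return Actor(name, handler, state)       # 2. actors create actors

    def receive(self, message):
        # 3. the handler returns the state used for the *next* message
        self.state = self.handler(self, message, self.state)


def counter(actor, message, count):
    if message == "inc":
        return (count or 0) + 1        # state change affects later behavior
    if message == "spawn":
        actor.create("child", counter, 0)
    return count


root = Actor("root", counter, 0)
root.receive("inc")
root.receive("inc")
root.receive("spawn")
print(root.state, "child" in Actor.registry)  # 2 True
```

In pico terms, the handler plays the role of the installed rulesets, and the state corresponds to the pico's persistent data store, which only rules running in response to events can change.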

Picos support direct asynchronous messaging by sending events to other picos. Also, picos implement an event bus internally. Events sent to the pico are placed on the internal event bus. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on the bus with event scenarios declared in the event expressions. Event expressions can specify simple single event matches, or complicated sets of events with temporal ordering.
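A tiny matcher shows what "temporal ordering" means in an event expression. Real KRL eventexes are far richer than this sketch; it only demonstrates matching a declared sequence of events against what has appeared on the bus.

```python
def then(*pattern):
    """Return a matcher that fires once the events in `pattern` have been
    seen in order; other events may be interleaved between them."""
    def matcher(events):
        i = 0
        for e in events:
            if i < len(pattern) and e == pattern[i]:
                i += 1
        return i == len(pattern)
    return matcher


# Declare the scenario: "fuel:low, then fuel:low again" (e.g. low fuel twice).
low_twice = then("fuel:low", "fuel:low")

bus = ["fuel:low", "trip:ended", "fuel:low"]  # events seen on the pico's bus
print(low_twice(bus))            # True: both events seen, in order
print(low_twice(["fuel:low"]))   # False: the sequence is incomplete
```

The declarative style matters: the rule states the scenario it cares about, and the pico's event bus does the bookkeeping of partial matches.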

Picos share nothing with other picos except through messages exchanged between them. Picos don't know and can't directly access the internal state of another pico.

As a result of their design, picos exhibit the following important properties:

  • Lock-free concurrency—picos respond to messages without locks.
  • Isolation—State changes in one pico cannot affect the state in other picos.
  • Location transparency—picos can live on multiple hosts and so computation can be scaled easily and across network boundaries.
  • Loose coupling—picos are only dependent on one another to the extent of their design.

Channels and Messaging

Pico-to-pico communication happens over event channels. Picos have many event channels. An event channel is point-to-point and delivers events directly to the pico. Picos can only interact by raising events and making requests on a specific event channel.

Picos can only get an event channel to another pico in one of four ways:

  1. Parenthood—when a pico creates another pico, it is given the only event channel to that new child pico.
  2. Childhood—when the parent creates a child, the child receives an event channel to its parent1.
  3. Endowment—as part of the initialization, a parent pico can give channels in its possession to the child.
  4. Introduction—one pico can introduce a second pico to a third. In addition, the OAuth flow supported by picos returns a channel to the pico.

Children only have a channel to their parent, unless (a) the parent gives them other channels during initialization, (b) the parent introduces them to another pico, or (c) they are capable of creating children. Consequently, a pico is completely isolated from any interaction that its parent (supervisor) doesn't provide. This creates a security model similar to the object capability model.
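Treating channels as capabilities makes the four acquisition rules easy to model: a pico can only reach picos whose channels it has been given. The sketch below is illustrative; the class and method names are not KRE's.

```python
class Pico:
    def __init__(self, name):
        self.name = name
        self.channels = {}  # pico name -> channel token (a capability)

    def create_child(self, name):
        child = Pico(name)
        self.channels[name] = f"chan->{name}"              # 1. parenthood
        child.channels[self.name] = f"chan->{self.name}"   # 2. childhood
        return child

    def endow(self, child, target_name):
        # 3. endowment: hand a channel already in our possession to a child
        child.channels[target_name] = self.channels[target_name]

    def introduce(self, a, b):
        # 4. introduction: give pico `a` a channel to pico `b`
        a.channels[b.name] = f"chan->{b.name}"


root = Pico("root")
fleet = root.create_child("fleet")
vehicle = root.create_child("vehicle")
root.introduce(fleet, vehicle)
print(sorted(fleet.channels))  # ['root', 'vehicle']
```

Before the introduction, `fleet` could only reach its parent; it had no way to name or message `vehicle` at all. That is the isolation property the object capability comparison refers to.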


A pico-based application is a collection of picos operating together to achieve some purpose. A single pico is almost never very interesting.

Picos live inside a host called KRE2. KRE is the engine that makes picos work. A given instance of KRE can host many picos. And there can be any number of KRE instances. Picos need not exist in the same instance of KRE to interact with each other.

When an account is created in KRE, a root pico is also created. That pico is special in that it is directly associated with the account and cannot be deleted without deleting the account. Only the root pico can be introduced to another application via OAuth. The root pico can create children, and those picos can also create children.

For example, in Fuse, the connected-car application we built using picos, each Fuse owner ends up with a collection of picos that look like this:

fuse microservice overall

The Fuse application isn't a single pico, but a collection. Fuse arranges picos in a particular configuration and endows them with specific functionality to create the overall experience. The owner pico is the root pico in the account. When it's created and initialized, it creates a fleet pico. The fleet pico creates vehicle picos as necessary as people use the mobile application to add vehicles.

Picos belong to a parent-child hierarchy with every pico except for the root pico having a parent. Further, each child pico in the hierarchy belongs to the same account as the root pico. Picos can be moved from one account to another although KRE does not yet support this.

The parent-child relationship is important because the parent installs the initial rules in the child, imbuing it with functionality, and then initializes the pico to give it relationships to other picos and set its initial state. Since the parent is installing rules of its choosing, initialization consists of installing an initialization rule and then sending the child pico an event that causes that rule to fire.
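The initialization pattern just described is worth seeing in miniature: install a rule, then raise the event that fires it. The names below are illustrative, not KRE's actual event domains or API.

```python
class Pico:
    def __init__(self):
        self.rules = {}  # event name -> rule function
        self.state = {}

    def install(self, event, rule):
        self.rules[event] = rule

    def raise_event(self, event, attrs):
        if event in self.rules:
            self.rules[event](self, attrs)


def init_rule(pico, attrs):
    # Set initial state and record relationships to other picos.
    pico.state["owner_channel"] = attrs["owner_channel"]
    pico.state["initialized"] = True


parent = Pico()
child = Pico()
child.install("pico:initialize", init_rule)   # parent installs rules it chose
child.raise_event("pico:initialize",          # then fires the init event
                  {"owner_channel": "chan-owner"})
print(child.state["initialized"])  # True
```

The key design point is that the child has no behavior the parent didn't install; raising `pico:initialize` on an empty pico would simply do nothing.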

But the collection of picos that make up an application need not communicate hierarchically, or even stay within the picos of a single account. The channels that link picos create the relationships that allow the application to work.

For example, consider the following diagram, again from Fuse:

Fuse with multiple owners

In this diagram, the fleet pico has two owners. That is, there are two picos that have owner rights for the fleet. The two owners are in different accounts and could be on different KRE hosts. The behavior of the application depends on the pico relationships represented by the arrangement of channels, channel attributes, and the rules installed in each pico. Because there's no fixed schema or configuration of picos, the overall application is very dynamic and flexible.

Each pico presents a potentially unique API based on the rulesets it contains. Mobile and web-based applications communicate with the pico and use its API using a model called the pico application architecture (PAA).

Internet First

Picos were developed to be Internet-centric:

  • Picos are always online. Picos don't crash, and they only go away when explicitly deleted. Picos can be deleted programmatically or when the account they are in is deleted.
  • Rulesets are loaded by URL. There is a registry that associates a short-name (called the ruleset ID) with the URL, but KRE only caches the ruleset. The definitive source is always the ruleset specified by the URL.
  • Channels are URLs. Events are raised using the URL, primarily over HTTP, although an SMTP gateway exists and KRE could support other transport mechanisms such as MQTT.
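For example, KRE's Sky Event API encodes the channel identifier, event domain, and event type directly in the URL; the host, identifiers, and event names below are made up:

```
https://kre.example.com/sky/event/<channel-id>/<event-id>/fuse/pickup_requested?vehicle=truck-1
```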

Being first-class Internet citizens sets picos apart from other actor-based programming tools used to build resilient systems. The Internet-centric design supports Internet of Things applications and means that pico-based systems inherently support the important properties that set the Internet apart, including decentralized operation, heterarchical structure, interoperability of components, and substitutability of servers.


Scattered throughout the preceding discussion are numerous references to identity issues, including authentication and authorization. Picos free developers from worrying about most identity issues.

Picos have a unique identity that is the basis of their isolation. Persistent data is independently stored for each pico without special effort by the developer. A channel is owned by only one pico. Rulesets are installed on a pico-by-pico basis.

Each root pico is associated with a single account, and descendants of the root pico are owned by the same account. The account handles authentication, including OAuth access, for any picos in the account. KRE implements the account identity layer.

Clients can use the OAuth flow to get access to a channel for the root pico. Access to another pico in the account is mediated by rulesets installed in the root pico. The client (i.e. relying party) can be a mobile application, Web application, or an online service. KRE acts as the authorization server, and the pico acts as the resource server. The resource is the pico's API.

pico and oauth

Giving out a channel is tantamount to giving the holder permission to send events. Future versions of the system will support policies on channels that restrict interactions. But for many uses (e.g. the Fuse connected car application) such restrictions are unnecessary since the only systems with access to the channels are under the account owner's control.

Reactive Programming with Picos

Picos present an incredibly flexible programming model. In particular, since picos can programmatically create other picos, form relationships with other picos by creating and sharing channels, and change their behavior by installing and uninstalling rulesets, they can be very dynamic.

There are several important principles that you should remember when using picos to create a reactive system.

Think reactively. Picos are much more responsive and scalable when programmed with events. While picos can query other picos to get data snapshots, queries can lead to excessively synchronous operation. Because picos support lock-free asynchronous concurrency, they are more efficient when responding to events to accomplish a task. But this requires a new way of thinking for developers who have traditionally programmed in object-oriented and imperative languages. As an example, I've described in detail how I converted a report generation tool for Fuse from a synchronous request-response system to one based on the scatter-gather pattern. The reactive solution is more resilient because it's designed to accommodate failure and missed messages. Moreover, the reactive solution is more scalable because it doesn't block and thus isn't subject to timeouts.
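To make the pattern concrete, here's a KRL sketch of the scatter and gather sides, assuming the fleet pico keeps a list of vehicle channels in ent:vehicles. The names, attribute handling, and exact event:send signature are illustrative and differ between KRE versions:

```
rule scatter_report_requests {
  // ask each vehicle pico for its part of the report without
  // blocking; replies come back later as separate events
  select when fuse periodic_report_requested
  foreach ent:vehicles setting (vehicle)
    event:send({"cid": vehicle{"channel"}}, "fuse", "vehicle_report_requested")
      with attrs = {"report_id": event:attr("report_id")};
}

rule gather_vehicle_report {
  // accumulate each vehicle's answer as it arrives
  select when fuse vehicle_report_created
  pre {
    report = event:attr("report");
  }
  noop();
  always {
    set ent:reports ent:reports.append(report);
  }
}
```

Because the scatter rule doesn't wait for responses, a slow or dead vehicle pico delays only its own contribution, not the whole report.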


Use picos to create the model. Picos represent individual entities since they have a unique identity and are persistent. They can have unique behavior and present their own API to other picos and applications. The overall structure of a system of picos should correspond to the relationships that entities have in the modeled system.

In Fuse, as we've seen, there are picos that represent the owner (a person), the fleet (a concept and collection), and each vehicle (a physical device). Picos were designed to build models for the Internet of Things, but they can be used for other models as well. For example, we used picos to model a guard tour system that included entities as diverse as locations and reports:

guard tour pico relationships

Think in interactions. Going along with the idea of modeling with picos, developers have to think in terms of the interactions that picos have with each other. While the rulesets installed in each pico define its behavior, the behavior of the application derives from the interactions that the picos have with each other. Developers can use tools like pico maps that show those relationships and swim-lane diagrams that show the interactions that happen in response to events.

In addition to the interactions between picos, the rules in a single pico are responsive to events and often raise an event to the same pico, causing other rules to respond. For example, this diagram shows the interactions of a few rules in a vehicle pico in response to a single pds:profile_updated event:

fuse microservice

You can see from the diagram that the single event sets off a chain of reactions, including sending events to other picos and calls to external APIs. You can read the details in Fuse as a Microservice Architecture.
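The first link in such a chain is often just a rule that reacts to the incoming event and re-raises a more specific event for other rules in the same pico. This sketch uses illustrative event and attribute names:

```
rule profile_updated {
  select when pds profile_updated
  pre {
    name = event:attr("myProfileName");
  }
  noop();
  always {
    // other rules in this pico select on explicit:vehicle_name_changed
    raise explicit event "vehicle_name_changed" with name = name;
  }
}
```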

Let the system do the scheduling and avoid blocking. When an event is raised in a pico, the pico event evaluation cycle picks rules to run based on their event expressions and schedules them.

event eval cycle

Developers can order rule evaluation inside a pico through the use of events, event expressions, and ad hoc locks built on persistent storage. Between picos, controlling order is even harder. In general, avoid using these mechanisms to sequence operations as much as possible and let the system do the scheduling.

Use idempotency. Failure is easier to handle when picos are not sensitive to repeated delivery of the same event, since senders are then free to retry without having to determine whether the previous event was fully processed, partially processed, or never delivered. Many operations are naturally idempotent. For those that aren't, the pico can often use guard rules that assure idempotency. Since one pico can't directly see another pico's state, the receiving pico must take responsibility for idempotency. Idempotent Services and Guard Rules provides more detail, including an example.
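A guard rule can make a non-idempotent update safe to redeliver by firing only when the event would actually change state. This sketch uses illustrative event, attribute, and variable names:

```
rule guard_email_update {
  select when pds email_updated
  pre {
    new_email = event:attr("email");
  }
  // fire only if this update hasn't already been applied
  if(ent:email neq new_email) then noop();
  fired {
    set ent:email new_email;
    raise explicit event "email_changed" with email = new_email;
  }
}
```

If the same pds:email_updated event arrives twice, the second delivery finds ent:email already updated, the condition is false, and no downstream event is raised.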


Be resilient. Developers can't anticipate every situation, but picos can be modified over time to be resilient to common failure modes. Picos have a number of methods for handling errors, including the ability to set default rulesets for error handling and the ability to select rules based on error conditions. In addition, because pico-based applications are incredibly dynamic, they can frequently be programmed to self-heal. For example, A Microservice for Healing Subscriptions describes how a simple rule can check for missing channels and recreate them.


Whither KRL?

If you've had a prior introduction to picos, you might be wondering about KRL, the rule language used to program picos. How is it that I managed to write a long post about reactive programming and picos without mentioning KRL?

Picos relate to KRL in a way analogous to the relationship between Unix and C. I can talk about Unix programming, processes, file systems, user IDs, scheduling, and so on without ever mentioning C. Similarly, I can talk about reactive programming and picos without explicitly mentioning KRL.

Does that mean KRL isn't important? On the contrary, there's no other way to program picos. But building a pico-based application isn't so much about KRL as it is the function, behavior, and arrangement of the picos themselves.

People often ask if we couldn't just get rid of KRL and use JavaScript or something else. The answer is a qualified "yes." The qualification is that to get pico functionality, you need to add event expressions, persistent data, a runtime environment, accounts and identifiers, channels, and the ability to manage the pico lifecycle dynamically, including the JavaScript installed in each one. By the time you're done, JavaScript is a very small part of the solution.

For most people, learning KRL isn't going to be the hard part. The hard part is thinking reactively. Once you're doing that, KRL makes implementing your reactive design fairly easy and you'll find that its features map nicely to the problems you're solving.


Picos are a dynamic, actor-based system that supports building reactive systems. The Internet-centric nature of picos means that they are especially suited to building Internet of Things applications that are decentralized and independent. All of the components that support building systems with picos are open source. I have a group of students helping to build, test, and use pico-based systems and further develop the model. We invite your questions, feedback, and help.

I found inspiration for writing this article and seeing actors as an answer to building reactive systems from Vaughn Vernon's book Reactive Messaging Patterns with the Actor Model: Applications and Integration in Scala and Akka.

  1. Strictly speaking, it isn't necessary for the child to automatically get the parent's channel and it may be better to let the parent supply that only if necessary. This may change in future versions of KRE. Let me know if you have feedback on this.
  2. KRE used to be an acronym for the Kynetx Rules Engine. Now, it's just KRE.

Ambience and Personal Learning Systems


My work on the Internet of Things has led me to be a big believer in the power of ambience. The Internet of Things is going to lead to a computing experience that is immersive and pervasive. Rather than interacting with a computer or a phone, we'll live inside the computation. Or at least that's how it will feel. Computing will move from being something we do, like a task, to something we experience all around us. In other words, the computing environment will be ambient.

Remarkably, we've been moving in the opposite direction with learning.

Since the advent of the Web, more and more learning activities have moved inside the computer. Everything from textbooks to lecture delivery to quizzes has moved onto the tiny screens that seem to dominate our lives. Consequently, when I speak to people about personal learning systems (PLS), the first questions often concern the UI. "What kind of dashboard will it have?" I even labeled one of the primary boxes in my personal learning system diagram the "dashboard."

As we started talking about the PLS and how it would work, I realized that a dashboard was the wrong way to think about it. One of the primary features of the personal learning system is an API that other systems can use. As a result, a lot of what a learner might do in a dashboard in a closed system will happen via the interactions that the learner has with other systems that then use the API.

Some of these interactions are obvious. For example, if an instructor schedules a quiz in Canvas (Instructure's LMS product), then Canvas ought to tell the student's PLS about the new quiz. And once the student completes the quiz, the quiz results and learning objectives (and even the quiz itself depending on instructor preferences) ought to get pushed out to the student's portfolio. The quiz is administered in Canvas since that's the LMS the instructor chose for that class. We shouldn't, indeed we can't, replicate every possible learning activity in the PLS. That's not its job.

We can easily imagine an LMS telling the PLS about a quiz and results via an API. But how does the student know the quiz was scheduled? And where is the student notified of results?

Your first thought might be the PLS dashboard, but I don't think most of us are looking for another place to check for things to do or get messages. The PLS ought to just use the systems the student already has.

The initial version of the PLS that we build will have a calendar server that is available via the PLS API. Regardless of how the LMS tells the PLS about an upcoming event, the student will see it in whatever calendar tool they already use because they'll be able to link the calendar tool on their phone or desktop to their personal API.

Similarly, notifications ought to come to the learner by whatever channel they choose. We have a separate NotifyMe project that we're building at BYU. Students give permission to senders and choose the channel for delivery of messages from a specific sender. Senders queue messages for delivery via an OAuth-moderated API. Students can revoke permission to send at any time. Channel types include email, SMS, dead-letter drop, and even things like Twitter and Facebook. We're planning to fold this capability into the PLS.

Browser extensions, mobile apps, etextbook readers, Slack integrations, the student's domain, and any other tool students use could be considered part of the student's learning environment and thus something that should be talking to the PLS.

Thinking outside the box a little, I've asked the group that works on BYU's tech classrooms to consider how a classroom might be part of the LMS so that the classroom is configured based on the needs of the faculty member teaching the class. You could extend this idea to the PLS as well. Why shouldn't the classroom write to the class member's PLS? That way the classroom, already part of the student's learning environment, is also integrated with the PLS.

Even functions that probably are part of the PLS, like planning, don't necessarily just have a dashboard. There may be some default set of screens or an app where learners plan their personal syllabus. But because there's an API, this function might be farmed out to other tools as well. For example, the student's learning plan may be influenced or even controlled (based on their goals, age, etc.) by systems used by counselors, HR departments, and other advisors.

A PLS shouldn't be one more place learners need to go, but rather something that hooks to everything else they do. Sure, there will be configuration screens, but since most learning happens ambiently, the PLS should respect that and be as unobtrusive as possible while still doing all it can to help people learn.

Tutorial Proposal: Using Domain Driven Design to Architect Microservices

I just posted this as a proposed tutorial at the O'Reilly Software Architecture conference next April in New York. I'm not holding my breath. My track record with O'Reilly conferences isn't great.


One of the hardest parts of creating a microservices architecture is knowing where the boundaries are. Whether we're designing a new system or refactoring a monolith that has become a "big ball of mud," microservices depend on getting the boundaries and interfaces right. Without proper boundaries and a good understanding of the necessary interaction patterns, your brand-new microservice-based system will just make the "big ball of mud" more complicated.

Microservices and Domain-Driven Design (DDD) are made for each other--they are both about boundaries. Using principles from DDD, we can understand the right places to split up the monolith. DDD concepts like bounded contexts and ubiquitous language are concrete tools, with a large body of work behind them, that help designers find and enforce boundaries. Moreover, DDD principles help the architect understand how microservices can be as loosely coupled as possible so that problems in one part of the system don't spill over into the rest.

Part I of this tutorial is an introduction to microservices and the challenges of architecting a system using them. We will explore important microservices concepts such as isolation, team autonomy, planning for failure, and lightweight communication.

Part II covers the key parts of so-called strategic DDD such as domains, bounded contexts, and ubiquitous languages. We will apply them to microservices and explore how DDD's bounded contexts help with the design of key microservice requirements.

Part III focuses on microservice interfaces and shows how DDD context maps can be used to design them. Context maps not only document the interfaces, but classify them and give architects a tool for exploring possible problems with interface decisions. Good interface design is a key to building resilient microservice-based systems. DDD provides concrete tools for baking resiliency into the design.

As part of the tutorial, students will participate in two hands-on exercises where they apply the principles they are learning. These exercises provide tools they can take back to their teams and use to get started on their own microservice designs.

Learning Objectives

At the end of this tutorial, students will be able to:

  • describe the key requirements for successful microservice design
  • understand DDD principles and apply them in finding the boundaries in a microservice architecture
  • explain a context map and show how context maps communicate interface issues in microservices


Phillip J. Windley, Ph.D. has been a computer science professor at Brigham Young University for over 20 years where he teaches classes on distributed systems. Currently Phil is an Enterprise Architect in the Office of the CIO at BYU where he leads efforts across the university to apply domain-driven design (DDD) and microservice principles in architecting campus systems. Phil was the founder and CTO of iMall.com, an early ecommerce tools company and founder and CTO of Kynetx, an Internet of Things company that built the connected-car product Fuse.