Silo-Busting MyWord Editor is Now Public

unhosted_web_architecture

I've written before about Dave Winer's nodeStorage project and his MyWord blogging tool. Yesterday Dave released the MyWord editor for creating blog posts.

I can see you yawning. You're thinking "Another blogging tool? Spare me! What's all the excitement?!?"

The excitement is over a few simple ideas:

  • First, MyWord is a silo-buster. Dave's not launching a company or trying to suck you onto his platform so he can sell ads. Rather, he's happy to have you take his software and run it yourself. (Yes, there are other blogging platforms you can self-host, the monster-of-them-all WordPress included. Read on.)
  • Second, the architecture of MyWord is based on Dave's open-source nodeStorage system. Dave's philosophy for nodeStorage is simple and matches my own ideas about users owning and controlling their own data, instead of having that data stored in some company's database to serve its ambitions. I've called this the Personal Cloud Application Architecture (PCAA).

A PCAA separates the application data from the application. This has significant implications for how Web applications are built and used.

I set up an instance of nodeStorage for myself at nodestorage.byu.edu. Now when I use the MyWord editor (regardless of where it's hosted) I can configure it to use my storage node and the data is stored under my control. This is significant because I'm using Dave's application and my storage. I'm not hosting the application (although I can do that, if I like, since it's open source). I'm simply hosting data. Here's my first post using the MyWord editor with my nodeStorage.

Making this work, obviously, requires that the storage system respond in certain ways so that the application knows what to expect. The nodeStorage system provides that. But not just for MyWord, for any application that needs identity (provided through Twitter) and storage (provided by Amazon S3). Dave's provided several of these applications and I'm sure more are in the works.
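
To make that concrete, here's a rough sketch of the division of labor from the application's point of view. The endpoint path and response shape below are invented for illustration; they aren't nodeStorage's documented API. The point is that the app supplies the UI while the storage node supplies identity and storage.

// Illustrative sketch only: the "/save" endpoint and response shape are
// hypothetical, not nodeStorage's actual API.
const storageNode = "https://nodestorage.byu.edu";  // the user's own node

async function savePost(post) {
  // the storage node, not the app, knows who the user is (identity via Twitter)
  const res = await fetch(storageNode + "/save", {
    method: "POST",
    credentials: "include",  // session established with the storage node
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(post)
  });
  return res.json();  // e.g., where the post ended up in the user's storage
}

savePost({ title: "Hello", text: "My first post." }).then(r => console.log(r));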

If more people had access to nodeStorage-based systems, application developers could take identity and storage for granted and focus on the app. I recognize that's a big "if", but I think it's a goal worth working toward.


Sessions I Want to Hold at IIW

IIW XX T-shirt Logo

Internet Identity Workshop XX is coming up in a few weeks (register here). IIW is an unconference, so if you're coming, you might want to start thinking about the sessions you want to hold. There's always room for more topics and the topics you bring are what makes IIW interesting.

I'm thinking about sessions on the following topics:

  1. The Future of Picos and Fuse—there are a lot of Fuse backers who come to IIW, so it's always a good place to talk about what's happening with Fuse (and hopefully recruit some help to work on the open source project). There's a boatload of interesting developments happening below the surface that I hope to share. Whether you're a Fuse backer or you're just interested in an Internet of Things that doesn't depend on CompuServe 2.0 (aka Web 2.0), you'll get something out of this session.
  2. Bureaucracy—This might seem like a weird topic for IIW, but I think it's relevant in some very interesting ways. What I'd really like is for some people coming to IIW to read David Graeber's The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy (at least Chapter 1) before coming so we can use it as the basis for the discussion. Graeber's position is that we now live in what he calls the "age of total bureaucratization." If you take that as a starting proposition, the question of what this means for the coming Internet of Things can be both fascinating and terrifying. Read the book and come prepared to discuss it!

By the way, the dog logo has long been a fixture at IIW. The one here will be on the 20th anniversary commemorative T-Shirt that you can add to your order when you register.


IBM's ADEPT Project: Rebooting the Internet of Things

IBM Think D100 Test

I recently spent some time learning about IBM's ADEPT project. ADEPT is a proof of concept for a completely decentralized Internet of Things. ADEPT is based on Telehash for peer-to-peer messaging, BitTorrent for decentralized file sharing, and the blockchain (via Ethereum) for smart contracts (this video from Primavera De Filippi on Ethereum is a good discussion of that concept).

The ideas and motivations behind the project as presented at IBM's Device Democracy align nicely with many of the concerns I have raised about the Internet of Things. To get a feel for that, watch this video from Paul Brody, vice president and global electronics industry leader for IBM Global Business Services. Brody, speaking at the Smart Home session of the IFA+ Summit, says “I come not to praise the smart home, but to bury it.” It's worth watching the whole thing:

Note: the video doesn't show Brody's slides. I couldn't find these exact slides, but this presentation to Facebook looks like it's close if you want to see some of the visuals.

The project has a couple of white papers:

  • Device democracy: Saving the future of the Internet of Things (PDF) is a business-level discussion of why the Internet of Things is already broken and needs a reboot.
  • ADEPT: An IoT Practitioner Perspective (PDF) is a more technical look at the protocols they chose and how they come together to create a completely decentralized Internet of Things. The paper describes their proof of concept based on Telehash, Ethereum, and BitTorrent. It’s worth reading to understand the way they’re thinking about trust, privacy, and device-to-device (D2D) and device-to-vendor (D2V) interactions.

Brody says the current IoT is broken and won't scale because of:

  • Broken business models
  • High cost
  • Lack of privacy
  • Not future-proof
  • Lack of functional value

One of the key ideas they discuss is autonomous coordination. This is critical in a world where any given person might have thousands of connected devices they interact with. We simply won't be able to coordinate it all ourselves (part of the reason the current IoT needs a reboot). They use an example I've used myself: electrical devices coordinating their use of the home's power to avoid a surcharge from the electric company. That's a hard problem that doesn't easily admit centralized solutions.

The ADEPT concept imagines each device being connected directly to the Internet, and consequently they spend some time dealing with questions like "what if my device is too slow or doesn't have enough memory to use the blockchain?" One of the reasons I'm a fan of creating virtual proxies of physical devices via persistent compute objects (picos) is that they can provide processing and storage that a simple device might not be able to provide because it's too slow, too small, intermittently online, and so on.

The more important reason for using virtual proxies on the Internet of Things is to provide representation for things that aren't physical things. People, places, organizations, concepts, and so on all need to interact with things. Picos provide an architecture for accomplishing that. Picos provide a foundation for the primary activities we need in a decentralized IoT:

  1. Distributed transaction processing and applications
  2. Peer-to-peer messaging and sharing
  3. Autonomous coordination and contracts between peers

And they do this for everything whether it has a processor or not.

The conclusion of the Device Democracy white paper says of winners and losers in the IoT economy:

Winners will:

  • Enable decentralized peer-to-peer systems that allow for very low cost, privacy and long term sustainability in exchange for less direct control of data
  • Prepare for highly efficient, real-time digital marketplaces built on physical assets and services with new measures of credit and risk
  • Design for meaningful user experiences, rather than try to build large ecosystems or complex network solutions.

Losers will:

  • Continue to invest in and support high-cost infrastructure, and be unmindful of security and privacy that can lead to decades of balance sheet overhead
  • Fight for control of ecosystems and data, even when they have no measure of what its value will be
  • Attempt to build ecosystems but lose sight of the value created, probably slowing adoption and limiting the usage of their solutions.

One of the things I really like about the IBM vision is that they do a good job of tying all of this to business value. Speaking of the effect the Internet has had on the market for digital content they say "The IoT will enable a similar set of transformations, making the physical world as liquid, personalized and efficient as the digital one." They use the idea of "liquifying the physical world" to bring this home and discuss why this enables things like the following:

  • Finding, using, and paying for physical assets the same way we do digital content today
  • Matching supply and demand for physical goods in real time
  • Digitally managing risk and assessing credit
  • Allowing unsupervised use of systems and devices, reducing transaction and marketing costs
  • Digitally integrating value chains in real time to instantly crowdsource and collaborate

This is a bold vision that aligns well with Doc Searls' thoughts expressed in The Intention Economy: When Customers Take Charge. This kind of business value is what will drive the IoT, not things like "turn on the lights when I get home." I think that's what Paul Brody meant when he said "I come not to praise the smart home, but to bury it." The smart home isn't where the business value will be and a centralized, proprietary, and closed vision for creating it is bound to fail.

I'm working on a white paper that lays out a similar reference architecture for the Internet of Things, so I find this project fascinating. More to come...


MyWord!

WORDS

I simply love MyWord.io from Dave Winer. This is such a simple, beautiful idea. Like all such ideas, it seems obvious once you've seen it.

MyWord.io is JavaScript for rendering a blog page (or any page, for that matter) from a JSON description of the contents in the style that Medium pioneered.

To understand it, click through to the example JSON file of an article on Anti-Vaxxers and then use MyWord.io to render the contents of the JSON file. MyWord.io also supports Markdown.

The magic here is that there's no server running a Web app in the style of Web 2.0. Neither is there an API. The JSON file is on Dropbox and could be hosted anywhere. The "application" is all JavaScript running in the browser. The JavaScript could be hosted anywhere too, since Dave has shared the source code on GitHub.
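
A minimal sketch of the pattern (the JSON field names here are invented for illustration; the real format is in Dave's repo):

// Sketch: fetch a JSON description of a post from wherever the author stored
// it, then render it entirely in the browser. Field names are made up; see the
// MyWord source on GitHub for the actual format.
// Assumes the page has elements with ids "title" and "body".
async function renderPost(jsonUrl) {
  const post = await (await fetch(jsonUrl)).json();
  document.title = post.title;
  document.getElementById("title").textContent = post.title;
  document.getElementById("body").innerHTML = post.html;  // or render Markdown
}

// the JSON can live on Dropbox, S3, or any static host
renderPost("https://example.com/posts/anti-vaxxers.json");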

This is an example of what I've been calling a personal cloud application architecture (PCAA). The key idea is to separate the application from the data and allow the data to be hosted anywhere the owner chooses. The advantage is that there's no central server intermediating the interaction.

Dave's on a roll here. I wrote about his nodeStorage project a few weeks ago. I'm heartened that developers like Dave are building applications that support people being in control of their own data rather than having to surrender it to the database of some company and be forced to interact with it through the company's administrative identity.


Ambient Computing

A real Internet of Things will be immersive and pervasive.

Imagine a connected glass at your favorite restaurant. The glass might report what and how much you drank to your doctor (or the police), make a record for the bill or even charge directly for each glass, send usage statistics to its manufacturer, tweet when you toast your guest, tell the waitstaff when it’s empty or spilled, coordinate with the menu to highlight good pairings, or present to your Google Glasses as a stein or elegant goblet depending on what’s in it. Now imagine that the plates, silverware, tablecloth, table, chair, and room are doing the same.

In their book Trillions, Lucas, Ballay, and McManus present a vision for a near-future world where nearly everything is connected together. About this network, they say:

We have literally permeated our world with computation. But more significant than mere numbers is the fact we are quickly figuring out how to make those processors communicate with each other, and with us. We are about to be faced, not with a trillion isolated devices, but with a trillion-node network: a network whose scale and complexity will dwarf that of today’s Internet. And, unlike the Internet, this will be a network not of computation that we use, but of computation that we live in.

Ambient computing, as this is called, is as difficult for us to imagine as it is for us to imagine living underwater. To us, water is something that exists in cups, tubs, and pools. We notice it and use it or avoid it as necessary. But to a fish, water is ambient. They cannot avoid it. Whether it is crystal pure or horribly polluted, they live in it.

Derek the goldfish


This change, from computing as a thing we do to something that we exist within, will have vast impact on our lives. Like the fish in water, we will be immersed in a sea of computation. Our actions and our words will have impact beyond their current sphere.

Ambient computing will be inescapable. There will be no living outside of the computation. Everything you do today will be intermediated by computation of some kind. A visit to the grocery store won't be possible without interacting with the smart packaging. Getting there won't be possible without smart vehicles that talk to smart roads and smart intersections. Preparing the food you buy will involve a smart power grid and connected appliances, pots, and pans. Even eliminating the waste will involve trash cans and toilets that are connected to the network.

Do we want to build this? That's the wrong question. Connecting everything is inevitable. Our choice is how we want things to be connected and who controls the devices, data, and processing.


nodeStorage and the Personal Cloud Application Architecture

Dave Winer just released software called nodeStorage along with a sample application called MacWrite. Dave's been working on these ideas for a long time and it's fun to watch it all coming together.

Dave's stated goal is support for browser-based applications, something near and dear to my heart. nodeStorage provides three important things that every app needs. In Dave's words:

nodeStorage builds on three technologies: Node.js for the runtime, Twitter for identity and Amazon S3 for storage.

This makes it easy to build applications by handling three big things that developers would otherwise have to worry about.

This idea is similar to my Personal Cloud Application Architecture (PCAA). The biggest difference is that PCAA isn't just solving the backend problem for developers, but proposing that the right way to do it is by using the application user's backend. Not only don't developers have to build the backend, they don't have to run it either! And the user gets to keep their data in their own space. Traditional apps do this:

standard_web_architecture

A PCAA app separates the app from the backend like so:

unhosted_web_architecture
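
In code, the separation is as simple as making the storage location a parameter the user supplies rather than something baked into the app. A minimal sketch (the endpoint shape is illustrative):

// PCAA sketch: the app never hard-codes where data lives. The user hands the
// app a pointer to their own storage and the app reads and writes there.
function makeApp(userStorageUrl) {
  return {
    load: async (key) => (await fetch(`${userStorageUrl}/${key}`)).json(),
    save: (key, value) =>
      fetch(`${userStorageUrl}/${key}`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(value)
      })
  };
}

// same application code, two different users, two different backends
const alice = makeApp("https://storage.alice.example/data");
const bob = makeApp("https://storage.bob.example/data");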

nodeStorage does this too. The only question is who runs the application data cloud. As far as I can see, there's nothing in Dave's proposal that would prevent nodeStorage from being used with something like a FreedomBox, Johannes Ernst's UBOS Linux distro, or any other indieweb project so that users can run their own backend for apps.

Dave's solving developer pain and is taking an important step down the path toward solving some user pain. Sounds like a strategy for adoption.


Re-imagining Decentralized and Distributed

I teach a course at BYU every year called "Large Scale Distributed Systems." As I discuss distributed systems with the class, there is always a bit of a terminology issue I have. It has to do with how we think of distributed systems vs. decentralized systems. You often see this diagram floating around the net:

centralised-decentralised-distributed

This always feels like an attempt to place the ideas of centralized, decentralized, and distributed computing on some kind of continuum.

In his PhD dissertation, Extending the REpresentational State Transfer (REST) Architectural Style for Decentralized Systems (PDF), Rohit Khare makes a distinction about decentralized systems that has always felt right to me. Rohit uses "decentralized" to distinguish systems that are under the control of different entities and thus can't be coordinated by fiat.

Plenty of systems are distributed that are still under the control of a single entity. Almost any large Web 2.0 service will be hosted from different data centers, for example. What distinguishes the Internet, SMTP, and other distributed systems is that they are also made to work across organizational boundaries. There's no centerpoint that controls everything.

Consequently, I propose a new way of thinking about this that gives up on the linearity of graphics like the one above and resorts to that most powerful of all analytic tools, the 2x2 matrix:

system_type_2x2

In this conceptualization, we classify systems along two axes:

  • Whether the components are co-located or distributed. This could be either physical or logical depending on the context and level of abstraction.
  • Whether the components are under the control of a single entity or multiple entities. A central control point could be logical or abstract so long as it is able to effectively coordinate nodes in the system.

We could envision a third axis on the model that also classifies systems as to whether they are hierarchical or heterarchical like so:

3 axes

If you're having trouble with the distinction, note that DNS is a decentralized, hierarchical system whereas Facebook's OpenGraph is a centralized, heterarchical system.
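
One way to see the model is as classification along independent axes rather than position on a line. A small sketch using the examples above (the classifications simply restate the text, nothing more):

// The model as data: each axis is independent, so systems don't have to fall
// somewhere on a single centralized-to-distributed line.
const systems = [
  { name: "DNS", located: "distributed", control: "multiple entities", shape: "hierarchical" },
  { name: "Facebook OpenGraph", located: "distributed", control: "single entity", shape: "heterarchical" }
];

// "decentralized" in Khare's sense is a statement about the control axis alone
const decentralized = systems.filter((s) => s.control === "multiple entities");
console.log(decentralized.map((s) => s.name));  // [ "DNS" ]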

I like this model and so, for now, I'm sticking with it and starting to think of and describe systems in this way. I've gotten some mental leverage out of it. I'd love to know what you think.


Rethinking Ruleset Registration in KRL

Updated January 21, 2014, 11:15am to add additional unresolved issues.

URL

Since its inception, KRL was meant to be a language of the Internet. This was something of an experiment. Firstly, as an Internet language, all processing is in the cloud. That is, it's a PaaS-only model; you can't run it from the command line. Secondly, programs would be identified by URL.

This has, for the most part, worked pretty well. But as I move to a model where multiple KRL rule engines (KREs) are running in Docker instances around the Internet, there's one early design decision that has caused some problems: ruleset registration.

URLs are long, so we created a registry where a ruleset identifier, or RID, could be mapped to the URL. This meant that KRL programs could refer to rulesets by a relatively short ID instead of a long URL. So, you'll see KRL code that looks like this:

ruleset example {
  meta {
    name "My Example Ruleset"

    use module a16x8 alias math
    use module b57x15
  }

  rule flip {
    select when echo hello
    pre {
      x = math:greatCircleDistance(56);
      y = b57x15:another_function("hello")
    }
    send_directive("hello world");
    always {
      raise notification event status for a16x69
        with dist = x
    }
  }
}

Note that we're using two modules identified by RID, a16x8 and b57x15 respectively. In the first case we gave it an alias to make the code easier to read. In the explicit event raise that happens in the rule's postlude, we raise the event for a specific ruleset by ID, a16x69 in this case. This doesn't happen often, but it's an optimization that KRL allows. When the rule engine runs across a RID, it looks it up in the registry and loads the code at the associated URL (if it's not cached).

The problem with a fixed registry is that each instance of KRE is running its own registry. No problems there unless we want them all to be able to run the same program, say Fuse. The Fuse rulesets refer to each other by RID. That means that they need to have the same RID on every instance of KRE. An ugly synchronization problem.

Another solution would be to create a global registry, but that's just another piece of infrastructure to run that will go down and cause reliability problems. If KRL is a language of the Internet, then it ought not be subject to single points of failure.

I've determined the real solution is to go back to the root idea and simply use URLs, with in-ruleset aliases, as the ruleset identifier. So the preceding code might become this:

ruleset example {
  meta {
    name "My Example Ruleset"

    use module https://s3.amazonaws.com/my_rulesets/math.krl alias math
    use module https://example.com/rulesets/transcode.krl alias transcode
    use rid notify for https://windley.com/rulesets/notification.krl
  }

  rule flip {
    select when echo hello
    pre {
      x = math:greatCircleDistance(56);
      y = transcode:another_function("hello")
    }
    send_directive("hello world");
    always {
      raise notification event status for notify
        with dist = x
    }
  }
}

Note that in the case of modules, we've simply replaced the RID with a URL and used the existing alias mechanism to provide a convenient handle. In the case of the event being raised to a specific ruleset, we don't necessarily want to load it as a module (and incur whatever overhead that might create), so I've introduced a new pragma in the meta block that declares an alias for a ruleset URL. The syntax for that isn't set in stone; this is just a proposal.

The advantage to this method is that now rulesets can live anywhere without explicit registration. And multiple instances of KRE can run the program without a central registry. The ruleset serves as a soft registry that can be changed by the programmer as needed without keeping some static structure up to date. Note: none of this changes the current security requirements for rulesets to be installed in a pico before they are run there.
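
The resolution step inside an engine then becomes little more than fetch-and-cache keyed by the URL itself. Here's a sketch of the idea (not KRE's actual implementation; the compile step is a stand-in):

// Sketch of URL-based ruleset resolution. Because the URL is the identifier,
// every KRE instance resolves it the same way with no shared registry.
const rulesetCache = new Map();
const compileKRL = (source) => ({ source });  // stand-in for the engine's real compile step

async function loadRuleset(url) {
  if (rulesetCache.has(url)) return rulesetCache.get(url);  // already loaded
  const source = await (await fetch(url)).text();  // fetch the KRL source
  const compiled = compileKRL(source);
  rulesetCache.set(url, compiled);
  return compiled;
}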

There are a few problems that I've yet to work out.

  1. This method works fine for rulesets that are publicly available at a URL. But some rulesets have developer keys and secrets. And some programmers don't want to make their ruleset public for other reasons (e.g. trade secrets). With a registry, we solved this problem by supporting BASIC AUTH URLs. Since the registry hid the URL, the password wasn't exposed. That obviously won't work here.

  2. The Sky Cloud API model relies on the RID. We obviously can't substitute a URL in the URL scheme for Sky Cloud and have it be very easy to use. One solution would be to use the ruleset name (the string immediately after the keyword ruleset in the ruleset definition) for this purpose. The system could dynamically register the name with the URL for a specific pico when the ruleset is installed in that pico. The user wouldn't be able to install two rulesets with the same name. This could be a potential problem since there's no way to enforce any global uniqueness on ruleset names.

  3. When rulesets are flushed from the cache in a given instance, the current method is to put a semicolon-separated list of RIDs in the flush URL. This would have to change to support a collection of URLs in the body of a POST (a possible request shape is sketched after this list).
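
Such a flush request might look something like this (a sketch; the "/flush" path is illustrative, not the current API):

// Cache flush with ruleset URLs in the POST body instead of RIDs in the URL
fetch("https://kre.example.com/flush", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    rulesets: [
      "https://s3.amazonaws.com/my_rulesets/math.krl",
      "https://example.com/rulesets/transcode.krl"
    ]
  })
});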

These are the issues I've thought of so far. I'll continue to update this as I give it more thought. I welcome your comments and especially any suggestions you have for improving this proposal.


Fuse, Kynetx, and Carvoyant

Fuse, the open-source connected-car platform I'm working on, is a stack of technologies that ultimately provide the total user experience. Here's one way to look at that stack:

Fuse technology stack

From bottom to top, the components of the stack are:

  1. The device, a CalAmp LMU-3030, is plugged into the vehicle and has a cellular data connection. The diagram leaves out the telephone company, but they're involved as well. The device uses data on the OBD-II port along with data from its built-in GPS to create a stream of information about the vehicle that is sent to Carvoyant.
  2. Carvoyant uses a telematics server that is designed to interact with the LMU device to receive and process the data stream from the device in the vehicle. Carvoyant processes that data stream and makes it available as an API.
  3. Kynetx hosts a rules engine called KRE. KRE is a container for online persistent objects that we call "picos." Each vehicle has a pico that processes its interactions and stores data on its behalf.
  4. The Fuse API is created by the software running in the vehicle's pico.
  5. Applications (like the Fuse app) use the Fuse API to provide a user experience.

Note that the mobile app is just one of many applications that might make use of the Fuse API. For example, as shown in this diagram, not only does the mobile app use the API, but so does the Fuse Management Console and the iCal feed.

fuse model

Picos are modeling devices that have significant advantages for connected things:

  • Picos can be used to model people, places, things, concepts, and so on. In Fuse, we have one for each vehicle, one representing the owner, and one representing the owner's fleet.
  • Picos are related to other picos to create useful systems. For example, in Fuse, the owner, fleet, and vehicle picos are, by default, related as shown in the following diagram.

    fuse microservice overall
  • Pico relationships are flexible. For example, a Fuse fleet can have two owners, an owner could allow a "borrower" relationship with someone borrowing the vehicle, and vehicles could have relationships with their manufacturers or service agents.
  • A vehicle pico can be moved from one fleet to another simply by changing the relationships.
  • Picos store the data for the entity they model. There's no big Fuse database with all the vehicle data in it. Each vehicle pico is responsible for keeping track of its own persistent data.
  • As a result of the pico-based persistent data store, personal data is more readily kept private.
  • Further, the pico-based persistent data store allows data about the vehicle (e.g. its maintenance records) to be kept with the vehicle when it has a new owner.
  • Even though all the Fuse picos are currently being hosted on the Kynetx-run instance of KRE, they could be hosted anywhere. Even vehicles in the same fleet could be hosted in different KRE containers if need be. I'm working on a Docker-based KRE install that will make this easier for people who want to self-host.
  • Each pico is an independent processing object and runs programs independently of other picos, even those of the same type. This means that a given vehicle pico might, for example, run an augmented API or a different set of rules for managing trips.
  • Picos have a built-in event bus that allows multiple rules to easily interact with events from the vehicle. We've put that to great use in creating Fuse by leveraging what can be seen as a microservices architecture (a sketch of raising an event to a pico follows this list).
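
Here's what raising an event to a vehicle pico might look like from the outside. The host, path, and channel identifier below are made up for illustration, not the documented API; the point is that everything reaches a pico as an event on its bus, and whatever rules are installed in the pico select on that event.

// Illustrative only: host, path, and channel are hypothetical.
const picoHost = "https://kre.example.com";
const channel = "ABC123";  // an event channel the vehicle pico handed out

fetch(`${picoHost}/event/${channel}/fuel/new_fuel_purchase`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ gallons: 12.3, price: 41.7, odometer: 84250 })
});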

The Fuse API differs from the Carvoyant API in several significant ways:

  • Fuse is fleet-based, meaning that Fuse provides fleet roll-up data not available from the Carvoyant API.
  • The Fuse API includes APIs for fuel and maintenance in addition to those for trips. These interact with data from Carvoyant, but aren't available in the Carvoyant API. For example, Fuse enriches trip data from Carvoyant with trip cost data based on fuel purchases (a rough sketch of that kind of enrichment follows this list).
  • Fuse uses Carvoyant and they've been a great partner. But my vision for Fuse is that it ought to allow vehicle data from a variety of devices. I'd love to let people use Automatic devices, for example, with Fuse. If you're interested in helping, let me know.
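
To illustrate the kind of enrichment mentioned above, here's a rough sketch of computing per-trip fuel cost from trips and fuel purchases. The data shapes are invented; they don't match the Carvoyant or Fuse APIs exactly.

// Sketch of trip-cost enrichment: spread fuel spending across miles driven.
function costPerMile(fuelPurchases, totalMiles) {
  const totalCost = fuelPurchases.reduce((sum, p) => sum + p.price, 0);
  return totalMiles > 0 ? totalCost / totalMiles : 0;
}

function enrichTrips(trips, fuelPurchases) {
  const totalMiles = trips.reduce((sum, t) => sum + t.miles, 0);
  const rate = costPerMile(fuelPurchases, totalMiles);
  return trips.map((t) => ({ ...t, estimatedFuelCost: t.miles * rate }));
}

const trips = [{ id: 1, miles: 12.4 }, { id: 2, miles: 31.0 }];
const fuel = [{ gallons: 10, price: 35.0 }];
console.log(enrichTrips(trips, fuel));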

The link to Carvoyant in the Fuse Management Console (OAuth) has caused some angst for people due to the need to create a Fuse (Kynetx) account and then to also create and link in a Carvoyant account. Indeed, this has been the source of 90% of the support issues I deal with. In theory, it's no different than linking your Pocket account to Facebook and Twitter so that you can share things you read with Pocket. In practice it's hard for people to understand for a few reasons:

  • In the Pocket example I cite, people already have a relationship with Twitter and Facebook.
  • Not only do they already have a relationship, but they understand what Twitter and Facebook are and why they want them.
  • Twitter and Facebook are used in more apps than Pocket, so Pocket is riding a wave of user understanding.
  • Pocket is linking to more than one thing and the fan out helps by providing multiple examples.

If Fuse supported more than just Carvoyant devices and you linked in multiple device accounts and if people used Carvoyant with more than one app, this might be clearer. But that's not reality right now, so we live with the model even though it seems somewhat forced.

The same is true of the Fuse (Kynetx) account. For simplicity, I refer to it as a Fuse account and the branding on the account interaction is Fuse, but if you pay attention, you're actually going to Kynetx to create the account. That's because you're really creating a hosting account for your picos on the Kynetx instance of KRE. Fuse itself really has no notion of an account. The Kynetx account is used to associate you with the owner pico that belongs to you, but that's all. Other mechanisms could be used to do that as well. You could run applications other than Fuse in that Kynetx account (and I do).

You're probably saying "this is more complicated than it has to be." And that's true if your goal is just to create a connected-car app like Automatic. My goal has always been a little larger than that: using Fuse as a means to explore how a larger, more owner-controlled Internet of Things experience could be supported. All this, or something similar, is necessary to create an owner-controlled Internet of Things experience.


The Core of Your API

cored apples

One of the topics that came into relief for me quite clearly recently is the idea of core domains and their application in API design. This happened as part of our design meetings for BYU's University API. When I say "core domain" I'm thinking of the concepts taught in Domain-Driven Design (DDD) and made clear in Implementing Domain-Driven Design (iDDD). (Aside: if you're in OIT and would like a copy of iDDD, stop by my office.)

DDD uses the terminology "core domain," "supporting domain," and "generic domain" to describe three types of software systems you might be building or using and how your organization should relate to each. My goal here isn't to expound on DDD; that's a different article. But I think you get the idea of what a core domain is: the "core domain is so critical and fundamental to the business that it gives you a competitive advantage and is a foundational concept behind the business."

Suppose you're an online merchant, for example. The core domain is probably the order processing system and orders are the fundamental artifact you worry about. Inventory is important, but it's a supporting domain. Customers are important too, but they're also supporting. The thing you worry about day in and day out is the order: the object that links items in the inventory, a customer, and a payment transaction.

Consequently, if you were designing an API for an online merchant, you'd probably make orders a top-level object in the API:

/orders

This would form the heart of everything you design.
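
In practice that means order routes sit at the top of the API and everything else hangs off of them. A sketch using Express, with generic resource names (this isn't any particular merchant's API):

// Orders-centric API sketch: customers, items, and payments appear as
// sub-resources of an order rather than the other way around.
const express = require("express");
const app = express();

const orders = new Map();  // stand-in for the order store
orders.set("1001", {
  id: "1001",
  customer: "cust-42",
  items: [{ sku: "A-100", qty: 2 }],
  payment: { status: "captured" }
});

app.get("/orders", (req, res) => res.json([...orders.values()]));
app.get("/orders/:id", (req, res) => {
  const order = orders.get(req.params.id);
  return order ? res.json(order) : res.sendStatus(404);
});
app.get("/orders/:id/items", (req, res) => {
  const order = orders.get(req.params.id);
  return order ? res.json(order.items) : res.sendStatus(404);
});

app.listen(3000);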

Applying this logic to a University API is harder. Universities tend to be pretty complicated places with lots of constituents. For example, if we were to just ask "what business is a university in?" the answer, at the core, is that universities are in the credentialing business. We certify that students have performed at required levels in prescribed sets of classes. Looked at this way, an enrollment object (marked as "complete") might be at the heart of a university API. But it turns out that almost no university systems care about enrollments as such, at least not the same way an ecommerce company cares about orders.

Universities care about students, courses, programs, classes, instructors, and classrooms. These are the key objects that fuel much of the university IT systems. Enrollments are in there, of course. You can ask what students are in a class and what courses a student is in or has completed. But you're always starting from the class or the student, not the enrollment itself.
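
The route structure reflects that: you reach enrollments through students and classes rather than treating them as the top-level object. These paths are invented to illustrate the point, not taken from the actual University API.

// Invented paths: enrollments show up as sub-resources of the things people
// actually start from, not as a top-level collection.
const routes = {
  classRoster: "/classes/{classId}/enrollments",  // who is in a class
  studentSchedule: "/students/{studentId}/enrollments",  // what a student is taking
  completedWork: "/students/{studentId}/enrollments?status=completed"
};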

Which of these is a core domain and which are supporting depends on your context. There's another key concept from DDD: "bounded contexts." The API needs to support each of these core objects, but how the API behaves with respect to a given object type depends on the context you're in. If I'm looking at a student from the context of tuition payments, I care about very different things than if they've just stopped by the counseling center.

The University API will support different contexts. Trying to support these very different contexts from a single model is unwieldy at best and likely impossible. But that doesn't mean that the University API can't supply a consistent experience regardless of the context. The University API should feel like a well-designed system. This is accomplished through well-known principles of API design including consistency in naming, identifiers, use of plurals, error messages, headers, return values, and HTTP method semantics. Our goal is that developers who've used the API in one context and learned its idioms will be able to easily transfer that experience to another and that using the API in that new context will feel natural and intuitive.