University API and Domains Workshop

BYU Logo Big

BYU is hosting a face-to-face meeting for university people interested in APIs and related topics (see below) on June 3 and 4 in Salt Lake City. Register here.

We've been working on a University API at BYU for almost a year now. The idea is to put a consistent API on top of the 950-some-odd services that BYU has in its registry. The new API uses resources that are understandable to anyone familiar with how a university works.

As we got into this effort, we found other universities had similar initiatives. Troy Martin started some discussions with various people involved in these efforts. There's a lot of excitement about APIs right now at universities big and small. That's why we thought a workshop would be helpful and fun.

The University API & Domains (UAD) workshop covers developing APIs, implementing DevOps practices, deploying Domain of One's Own projects, improving the use of digital identity technologies, and framing digital fluency on university campuses. The workshop focuses on current issues and best practices in building out conceptual models and real-life use cases. Attendees include IT architects, educational technologists, faculty, and software engineers from many universities.

Kin Lane, the API Evangelist, will be with us to get things kicked off on the first morning. After that, the agenda will be up to the attendees because UAD is an unconference. It has no assigned speakers or panels, so it's about getting stuff done. We will have a trained open space facilitator at the workshop to run the show and make sure we are properly organized.

Because UAD is an unconference, you’re invited to speak. If you have an idea for a session now or even get one in the middle of the conference, you’re welcome to propose and run a session. At the beginning of each day we will meet in opening circle and allow anyone who wants to run a session that day to make a proposal and pick a time. There is no voting or picking other than with your feet when you choose to go.

Whether you're working at a university or just interested in APIs and want to get together with a bunch of smart folk who are solving big, hairy API problems, you'll enjoy being at this workshop and we'd love to have you: Register here.

New in Fuse: Notifications

Guide to Fuse Replacement

I've recently released changes to the Fuse system that support alert notifications via email and text.

This update lays the groundwork for upcoming maintenance features that will remind you when maintenance is due and use these alerts to help you schedule maintenance items.

Even now, however, this update is useful: you will receive emails or SMS messages when your car issues a diagnostic trouble code (DTC), a low fuel alert, or a low battery alert. I recently found I had a battery going bad because of several low battery alerts from my truck.

In addition, I've introduced new events for the device being connected or disconnected that will also be processed as alerts. Several people have reported their device stopped working when in fact what had happened was the device became unplugged. Now you will get an email when your device gets unplugged.

Also, I've updated the system so that vehicle data is processed approximately once per minute while the vehicle is in motion. Previously, Fuse only updated on "ignition on" and "ignition off". This means that when one of your vehicles is in motion and you check either the app or the Fuse management console (FMC), you'll see the approximate current position as last reported by the device.

These changes don't get installed in your account automatically. You can activate them by going to the Fuse management console (FMC), opening your profile, and saving it.

If your profile has a phone number, you will receive Fuse alerts via text. Otherwise they will come via email. More flexibility in this area is planned for future releases.
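The phone-or-email routing above is simple enough to sketch. This is illustrative JavaScript, not the actual Fuse ruleset (which is written in KRL), and the profile field names are assumptions:

```javascript
// Sketch of the alert routing described above: text if the profile has
// a phone number, otherwise email. Field names are hypothetical.
function alertChannel(profile) {
  return profile.phone
    ? { via: 'sms', to: profile.phone }
    : { via: 'email', to: profile.email };
}

console.log(alertChannel({ email: 'me@example.com', phone: '+18015551212' }).via); // sms
console.log(alertChannel({ email: 'me@example.com' }).via); // email
```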

This update also reflects several important advances for picos and CloudOS, which underlie Fuse. The new notification ruleset being installed is the progenitor of a new notification system for CloudOS. The updates themselves are happening via a rudimentary "pico schema and initialization" system that I've developed; it will be incorporated into CloudOS this summer as part of making it easier for developers to work with collections of picos.

What's New With KRL


In The End of Kynetx and a New Beginning, I described the shutdown of Kynetx and said that the code supporting KRL has been assigned to a thing I created called Pico Labs. Here's a little more color.

All of the code that Kynetx developed, including the KRL Rules Engine and the KRL language is open source (and has been for several years). My intention is to continue exploring how KRL and the computational objects that KRL runs in (called picos) can be used to build decentralized solutions to problems in the Internet of Things and related spaces.

I have some BYU students helping me work on all this. This past semester they built and released a KRL developer tools application. The application is nearly feature-complete and will provide a firm foundation for future KRL work.

Our plan is to rewrite the CloudOS code and the JavaScript SDK that provides access to it over the summer. The current version of CloudOS is an accretion of stuff from various eras, has lots of inconsistencies, and is missing some significant features. I used CloudOS extensively as part of writing the Fuse API, which is based on KRL and picos, and I found lots of places we can improve it. I'm anxious to clean it up.

As a motivating example for the CloudOS rewrite, we'll redo some of the SquareTag functionality in the PCAA architectural style. SquareTag makes extensive use of picos and will provide a great application of CloudOS for testing and demonstration.

I also continue to work (slowly) on a Docker instance of KRE so that others can easily run KRE on their own servers.

I'm hopeful that these developments will make picos, KRL, and CloudOS easier to use, and that between Fuse, Guard Tour, and SquareTag we'll have some interesting examples that demonstrate why all of this is relevant to building the Internet of Things.

Why is Blockchain Important

Colorful Wooden Blocks Children's Museum Macro April 17, 2011

The world is full of directories, registries, and ledgers—mappings from keys to values. We have traditionally relied on some central authority (whoever owns the ledger) to ensure its consistency and availability. Blockchain is a global-scale, practical ledger system that demonstrates consistency and availability without a central authority or owner. This is why blockchain matters.

This is not to say that blockchain is perfect and solves every problem. Specifically, while the blockchain is easy to query, it is computationally expensive to update. That is the price for consistency (and is likely not something that can be overcome). But there is also a real cost for creating centrally controlled ledgers. DNS, banks, and so on aren't free. Cost limits applicability, but doesn't change the fact that a new thing exists in the world: a distributed ledger that works at global scale. That new thing will disrupt many existing institutions and processes that have traditionally had no choice but to use a central ledger.

The End of Kynetx and a New Beginning

Kynetx Shield

Steve Fulling and I started Kynetx in 2008. The underlying technology, a rules engine, was originally designed to modify Web pages "at the glass" or, in other words, in the browser. We did that in various ways: planting tags, using Information Cards, and with the Kynetx Browser extension. We did some interesting things, had some great partners, and good customers. But ultimately, we weren't a mobile solution and there was no good way to deploy our browser-centric business model on mobile phones. Meanwhile mobile was where all the excitement was. This made it hard to get customers and raise money.

So we pivoted. Starting in 2011, we focused more on using our technology to create contextual experiences outside the browser and, more and more, that meant the Internet of Things. My book on Kynetx technology, The Live Web, was written during this transition and, looking back, I can clearly see our developing ideas in the structure of the book. We did a Digital Asset Grid pilot for Swift in this new mode, built a personal inventory system called SquareTag that I still use, and then successfully completed a Kickstarter campaign for Fuse, a connected-car product.

Along the way we had a lot of fun, worked with a lot of great people, and came close to the brass ring a few times. But Kynetx's runway finally ran out in 2014. We completed the Fuse Kickstarter in the first part of November 2013 and were very excited about the possibilities. We had also just signed a contract to complete a Guard Tour application using SquareTag technology. Then, at the end of November 2013, tragedy struck: Ed Orcutt, my chief collaborator and our lead developer, died. In addition to the emotional toll that Ed's death took on all of us, it also cost us dearly in momentum and the ability to quickly get things done. In hindsight, if we'd executed perfectly and taken some very bold steps, we might have made it through, but we didn't. And we ran out of money before we could complete and deliver Fuse.

We did ultimately manage, with the help of credit cards and some friends and family, to deliver Fuse to Kickstarter backers in October of 2014, about six months later than we'd originally planned. By that point Kynetx had no employees, a pile of debt, and little hope for partnerships and revenue. The writing was on the wall and I clearly saw that we were going to have to find a way to unwind everything cleanly while doing our best to keep commitments to creditors and customers.

On April 3, 2015 the Kynetx shareholders voted to dissolve the company. I proposed that a new entity I've created, Pico Labs, would assume the debt in exchange for the few tangible (servers) and non-tangible (IP) assets that Kynetx had. The shareholders agreed with that proposal too because it seemed the best way to provide a soft landing for customers. Carvoyant, the partner who supplied Fuse devices and connectivity, has agreed to take on customer obligations for Fuse owners. Those two developments should allow me to keep Fuse going as an open source project.

I still believe the ideas we developed for how the Internet of Things could work are valuable. In particular, I remain passionate about creating a true Internet of Things rather than a mere CompuServe of Things. I continue, along with a set of intrepid students, to work on the Kynetx Rules Engine (KRE) and picos. We recently released a new set of developer tools and this summer plan to completely refactor CloudOS to incorporate everything we learned building Guard Tour and Fuse.

Kynetx was a great experience, one I'll be forever grateful for. Steve and I had a lot of fun, worked with great people who we love, and supplied a living for ourselves and others. We built some great technology and used that to influence some important conversations. I'm thankful for the confidence our shareholders showed by investing in us, our customers showed by buying from us, our employees showed by giving us their enthusiasm and passion, and, most importantly, our families displayed by their continued belief and sacrifice. I'm sorry to have let them down. But while Steve and I made plenty of mistakes, as founders do, I'm confident we worked as hard as we could to make this work every day, year in and year out.

Steve is off on a new adventure: Chargeback Ops, building a great merchant-focused chargeback processing service. I'm involved as an investor and technical advisor. If you need chargeback services, be sure to contact him.

Personally, I'm fine. I'm mostly through the sad part now, although I do still occasionally get emotional about Kynetx ending. I've returned to BYU and I'm having great fun working on several projects like the University API and Domain of One's Own. It feels good, after startup life, to have the people and resources to really move quickly. BYU is in a good position, from an IT perspective, to concentrate on strategic projects rather than merely trying to keep the ERP system working. Consequently, I feel like I'm getting things done that make a difference, and that feels good.

Silo-Busting MyWord Editor is Now Public


I've written before about Dave Winer's nodeStorage project and his MyWord blogging tool. Yesterday Dave released the MyWord editor for creating blog posts.

I can see you yawning. You're thinking "Another blogging tool? Spare me! What's all the excitement?!?"

The excitement is over a few simple ideas:

  • First, MyWord is a silo-buster. Dave's not launching a company or trying to suck you onto his platform so he can sell ads. Rather, he's happy to have you take his software and run it yourself. (Yes, there are other blogging platforms you can self-host, the monster-of-them-all Wordpress included. Read on.)
  • Second, the architecture of MyWord is based on Dave's open-source nodeStorage system. Dave's philosophy for nodeStorage is simple and matches my own ideas about users owning and controlling their own data, instead of having that data stored in some company's database to serve its ambitions. I've called this the Personal Cloud Application Architecture (PCAA).

A PCAA separates the application data from the application. This has significant implications for how Web applications are built and used.

I set up an instance of nodeStorage for myself. Now when I use the MyWord editor (regardless of where it's hosted) I can configure it to use my storage node and the data is stored under my control. This is significant because I'm using Dave's application and my storage. I'm not hosting the application (although I can do that, if I like, since it's open source). I'm simply hosting data. Here's my first post using the MyWord editor with my nodeStorage.

Making this work, obviously, requires that the storage system respond in certain ways so that the application knows what to expect. The nodeStorage system provides that, not just for MyWord but for any application that needs identity (provided through Twitter) and storage (provided by Amazon S3). Dave's provided several of these applications and I'm sure more are in the works.

If more people have access to nodeStorage-based systems, application developers could leave identity and storage to nodeStorage and focus on the app. I recognize that's a big "if", but I think it's a goal worth working toward.
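The PCAA pattern is easy to sketch. The interface below is hypothetical (nodeStorage's real API differs), but it shows the shape of the separation: the application is written against a storage interface, and the user decides what backs it:

```javascript
// PCAA-style separation sketch: the application never owns the data; it is
// handed a storage interface chosen by the user. The put/get interface here
// is an assumption for illustration, not nodeStorage's actual API.
function makeMemoryStore() {
  const data = new Map();
  return {
    put: async (key, value) => { data.set(key, JSON.stringify(value)); },
    get: async (key) => {
      const v = data.get(key);
      return v === undefined ? null : JSON.parse(v);
    },
  };
}

// The "application": a tiny blog that works against ANY store with put/get.
function makeBlog(store) {
  return {
    publish: (slug, post) => store.put(`posts/${slug}`, post),
    read: (slug) => store.get(`posts/${slug}`),
  };
}

// The user picks the storage node; the app just uses it. Swapping the
// in-memory store for an S3- or server-backed one changes nothing in makeBlog.
const blog = makeBlog(makeMemoryStore());

(async () => {
  await blog.publish('hello', { title: 'Hello', body: 'My data, my node.' });
  console.log((await blog.read('hello')).title); // Hello
})();
```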

Sessions I Want to Hold at IIW

IIW XX T-shirt Logo

Internet Identity Workshop XX is coming up in a few weeks (register here). IIW is an unconference, so if you're coming, you might want to start thinking about the sessions you want to hold. There's always room for more topics and the topics you bring are what makes IIW interesting.

I'm thinking about sessions on the following topics:

  1. The Future of Picos and Fuse—there are a lot of Fuse backers who come to IIW, so it's always a good place to talk about what's happening with Fuse (and hopefully recruit some help to work on the open source project). There's a boatload of interesting developments happening below the surface that I hope to share. Whether you're a Fuse backer or you're just interested in an Internet of Things that doesn't depend on CompuServe 2.0 (aka Web 2.0), you'll get something out of this session.
  2. Bureaucracy—This might seem like a weird topic for IIW, but I think it's relevant in some very interesting ways. What I'd really like is for some people coming to IIW to read David Graeber's The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy (at least Chapter 1) before coming so we can use it as the basis for the discussion. Graeber's position is that we now live in what he calls the "age of total bureaucratization." If you take that as a starting proposition, the question of what this means for the coming Internet of Things can be both fascinating and terrifying. Read the book and come prepared to discuss it!

By the way, the dog logo has long been a fixture at IIW. The one here will be on the 20th anniversary commemorative T-Shirt that you can add to your order when you register.

IBM's ADEPT Project: Rebooting the Internet of Things

IBM Think D100 Test

I recently spent some time learning about IBM's ADEPT project. ADEPT is a proof of concept for a completely decentralized Internet of Things. ADEPT is based on Telehash for peer-to-peer messaging, BitTorrent for decentralized file sharing, and the blockchain (via Ethereum) for smart contracts (this video from Primavera De Filippi on Ethereum is a good discussion of that concept).

The ideas and motivations behind the project as presented at IBM's Device Democracy align nicely with many of the concerns I have raised about the Internet of Things. To get a feel for that, watch this video from Paul Brody, vice president and global electronics industry leader for IBM Global Business Services. Brody, speaking at the Smart Home session of the IFA+ Summit, says “I come not to praise the smart home, but to bury it.” It's worth watching the whole thing:

Note: the video doesn't show Brody's slides. I couldn't find these exact slides, but this presentation to Facebook looks like it's close if you want to see some of the visuals.

The project has a couple of white papers:

  • Device Democracy: Saving the future of the Internet of Things (PDF) is a business-level discussion of why the Internet of Things is already broken and needs a reboot.
  • ADEPT: An IoT Practitioner Perspective (PDF) is a more technical look at the protocols they chose and how they come together to create a completely decentralized Internet of Things. The paper describes their proof of concept based on Telehash, Ethereum, and BitTorrent. It’s worth reading to understand the way they’re thinking about trust, privacy, and device-to-device (D2D) and device-to-vendor (D2V) interactions.

Brody says the current IoT is broken and won't scale because of:

  • Broken business models
  • High cost
  • Lack of privacy
  • Not future-proof
  • Lack of functional value

One of the key ideas they discuss is autonomous coordination. This is critical in a world where any given person might have thousands of connected devices they interact with. We simply won't be able to coordinate it all ourselves (part of the reason the current IoT needs a reboot). For example, they use a scenario I've also used: electrical devices coordinating their use of the home's power to avoid a surcharge from the electric company. That's a hard problem that doesn't easily admit centralized solutions.
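As a toy version of that coordination problem, here's a greedy scheduler that packs device run times under a household power cap. The device names and numbers are made up, and a real solution would be negotiated peer-to-peer by the devices themselves rather than computed in one place, which is exactly the harder problem autonomous coordination tackles:

```javascript
// Illustrative only: fit each device's run into the first window where the
// total draw stays under a cap (e.g. to avoid a utility surcharge).
function schedule(devices, capWatts, horizonHours) {
  const load = new Array(horizonHours).fill(0); // watts in use per hour slot
  const plan = {};
  for (const d of devices) {                    // d: { name, watts, hours }
    outer:
    for (let start = 0; start + d.hours <= horizonHours; start++) {
      for (let h = start; h < start + d.hours; h++) {
        if (load[h] + d.watts > capWatts) continue outer; // window won't fit
      }
      for (let h = start; h < start + d.hours; h++) load[h] += d.watts;
      plan[d.name] = start;                     // earliest feasible start hour
      break;
    }
    if (!(d.name in plan)) plan[d.name] = null; // couldn't fit under the cap
  }
  return plan;
}

const plan = schedule(
  [ { name: 'dryer',       watts: 3000, hours: 1 },
    { name: 'dishwasher',  watts: 1800, hours: 2 },
    { name: 'car charger', watts: 7000, hours: 3 } ],
  8000, // cap in watts
  6     // 6-hour window
);
console.log(plan); // the car charger gets pushed later so the total stays under cap
```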

The ADEPT concept imagines each device being connected directly to the Internet, and consequently they spend some time dealing with questions like "what if my device is too slow or doesn't have enough memory to use the blockchain?" One of the reasons I'm a fan of creating virtual proxies of physical devices via persistent compute objects (picos) is that they can provide processing and storage that a simple device might not be able to provide because it's too slow, too small, intermittently online, and so on.

The more important reason for using virtual proxies in the Internet of Things is to provide representation for things that aren't physical devices. People, places, organizations, concepts, and so on all need to interact with things, and picos provide an architecture for accomplishing that. Picos provide a foundation for the primary activities we need in a decentralized IoT:

  1. Distributed transaction processing and applications
  2. Peer-to-peer messaging and sharing
  3. Autonomous coordination and contracts between peers

And they do this for everything whether it has a processor or not.
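Here's a rough illustration of what a pico-style proxy provides. This is not the actual KRE implementation, just a sketch of the idea: persistent state and an event channel maintained on behalf of something that may be offline, underpowered, or not a computer at all:

```javascript
// A toy pico-style proxy (illustrative, not KRE): persistent state plus an
// event channel standing in for the thing being represented.
function makePico(name) {
  const state = {};    // persistent state survives between events
  const listeners = {};
  return {
    name,
    on: (eventType, fn) => {
      listeners[eventType] = listeners[eventType] || [];
      listeners[eventType].push(fn);
    },
    raise: (eventType, attrs) => {
      for (const fn of listeners[eventType] || []) fn(state, attrs);
    },
    query: (key) => state[key], // queries read state without raising events
  };
}

// A pico can represent a non-device entity; here, a place.
const kitchen = makePico('kitchen');
kitchen.on('temperature', (state, attrs) => { state.lastTemp = attrs.value; });
kitchen.raise('temperature', { value: 21.5 });
console.log(kitchen.query('lastTemp')); // 21.5
```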

The conclusion of the Device Democracy white paper says of winners and losers in the IoT economy:

Winners will:

  • Enable decentralized peer-to-peer systems that allow for very low cost, privacy and long term sustainability in exchange for less direct control of data
  • Prepare for highly efficient, real-time digital marketplaces built on physical assets and services with new measures of credit and risk
  • Design for meaningful user experiences, rather than try to build large ecosystems or complex network solutions.

Losers will:

  • Continue to invest in and support high-cost infrastructure, and be unmindful of security and privacy that can lead to decades of balance sheet overhead
  • Fight for control of ecosystems and data, even when they have no measure of what its value will be
  • Attempt to build ecosystems but lose sight of the value created, probably slowing adoption and limiting the usage of their solutions.

One of the things I really like about the IBM vision is that they do a good job of tying all of this to business value. Speaking of the effect the Internet has had on the market for digital content they say "The IoT will enable a similar set of transformations, making the physical world as liquid, personalized and efficient as the digital one." They use the idea of "liquifying the physical world" to bring this home and discuss why this enables things like the following:

  • Finding, using, and paying for physical assets the same way we do digital content today
  • Matching supply and demand for physical goods in real time
  • Digitally managing risk and assessing credit
  • Allowing unsupervised use of systems and devices, reducing transaction and marketing costs
  • Digitally integrating value chains in real time to instantly crowdsource and collaborate

This is a bold vision that aligns well with Doc Searls' thoughts expressed in The Intention Economy: When Customers Take Charge. This kind of business value is what will drive the IoT, not things like "turn on the lights when I get home." I think that's what Paul Brody meant when he said "I come not to praise the smart home, but to bury it." The smart home isn't where the business value will be and a centralized, proprietary, and closed vision for creating it is bound to fail.

I'm working on a white paper that lays out a similar reference architecture for the Internet of Things, so I find this project fascinating. More to come...



I simply love MyWord from Dave Winer. This is such a simple, beautiful idea. Like all such ideas, it seems obvious once you've seen it. MyWord is JavaScript for rendering a blog page (or any page, for that matter) from a JSON description of the contents in the style that Medium pioneered.

To understand it, click through to the example JSON file of an article on Anti-Vaxxers and then use MyWord to render the contents of the JSON file. MyWord also supports Markdown.
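To see why this needs no server, here's a toy renderer in the same spirit: the post is pure JSON, and a bit of client-side JavaScript turns it into HTML. The field names are made up for illustration; MyWord's actual schema may differ:

```javascript
// Toy Medium-style renderer: the "page" is pure data plus client-side code.
// The post schema here (title, paragraphs) is an assumption, not MyWord's.
const escapeHtml = (s) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

function renderPost(post) {
  const paragraphs = post.paragraphs
    .map((p) => `<p>${escapeHtml(p)}</p>`)
    .join('\n');
  return `<article>\n<h1>${escapeHtml(post.title)}</h1>\n${paragraphs}\n</article>`;
}

const post = {
  title: 'Hello, MyWord',
  paragraphs: ['The page is just data.', 'The renderer is just JavaScript.'],
};
console.log(renderPost(post));
```

In a browser, the JSON would be fetched from wherever the author hosts it (Dropbox, S3, a personal server) and `renderPost` would write into the DOM; nothing about the rendering requires a Web 2.0-style application server.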

The magic here is that there's no server running a Web app in the style of Web 2.0. Neither is there an API. The JSON file is on Dropbox and could be hosted anywhere. The "application" is all JavaScript running in the browser. The JavaScript could be hosted anywhere too, since Dave has shared the source code on GitHub.

This is an example of what I've been calling a personal cloud application architecture (PCAA). The key idea is to separate the application from the data and allow the data to be hosted anywhere the owner chooses. The advantage is that there's no central server intermediating the interaction.

Dave's on a roll here. I wrote about his nodeStorage project a few weeks ago. I'm heartened that developers like Dave are building applications that support people being in control of their own data rather than having to surrender it to the database of some company and be forced to interact with it through the company's administrative identity.

Ambient Computing

A real Internet of Things will be immersive and pervasive.

Imagine a connected glass at your favorite restaurant. The glass might report what and how much you drank to your doctor (or the police), make a record for the bill or even charge directly for each glass, send usage statistics to its manufacturer, tweet when you toast your guest, tell the waitstaff when it’s empty or spilled, coordinate with the menu to highlight good pairings, or present to your Google Glasses as a stein or elegant goblet depending on what’s in it. Now imagine that the plates, silverware, tablecloth, table, chair, and room are doing the same.

In their book Trillions, Lucas, Ballay, and McManus present a vision for a near-future world where nearly everything is connected together. About this network, they say:

We have literally permeated our world with computation. But more significant than mere numbers is the fact we are quickly figuring out how to make those processors communicate with each other, and with us. We are about to be faced, not with a trillion isolated devices, but with a trillion-node network: a network whose scale and complexity will dwarf that of today’s Internet. And, unlike the Internet, this will be a network not of computation that we use, but of computation that we live in.

Ambient computing, as this is called, is as difficult for us to imagine as it is for us to imagine living underwater. To us, water is something that exists in cups, tubs, and pools. We notice it and use it or avoid it as necessary. But to a fish, water is ambient. They cannot avoid it. Whether it is crystal pure or horribly polluted, they live in it.


Derek the goldfish

This change, from computing as a thing we do to something we exist within, will have a vast impact on our lives. Like the fish in water, we will be immersed in a sea of computation. Our actions and our words will have impact beyond their immediate sphere.

Ambient computing will be inescapable. There will be no living outside the computation. Everything you do will be intermediated by computation of some kind. A visit to the grocery store won't be possible without interacting with the smart packaging. Getting there won't be possible without smart vehicles that talk to smart roads and smart intersections. Preparing the food you buy will involve a smart power grid and connected appliances, pots, and pans. Even eliminating the waste will involve trash cans and toilets that are connected to the network.

Do we want to build this? That's the wrong question. Connecting everything is inevitable. Our choice is how we want things to be connected and who controls the devices, data, and processing.