On Names and Heterarchy

Names not to be forgotten

When I first started using Unix, DNS was not widely used. Instead we FTP'd hosts files from a computer at Berkeley, merged them with a local hosts file, and installed the result in /etc. Mail addresses had ! in them to specify an explicit route from a well-known host to the local machine. We had machine names, but no global system for dereferencing them.

DNS changed all that by providing a decentralized naming service that based lookup on a hierarchy starting with a set of well-known machines servicing a top-level domain (TLD), like .com. Nothing was more important during the '90s than a great domain name with a .com at the end. URLs, or Uniform Resource Locators, the global naming system for web pages, were based on DNS, so having a short, memorable domain name was, and still is, an asset.

Of course the good domain names were quickly all gone. I was lucky enough to own a few good ones over the years: superbowl.com, skiutah.com, shoppingcart.com, imall.com, and stuff.com. I was also early enough to get my name, windley.com. If you're just coming to this party, however, your name is long gone unless you want to use a TLD that no one has heard of and won't recognize. Anyone in the .pe namespace?

Names

Names are used to refer to things. Without names, we'd constantly be describing people, places, and things to each other whenever we wanted to talk about them. You do that now when you can't remember someone's name: "You know, the guy in the green shirt, with the beard, walking a dog?" Any given entity can have multiple names that all refer to the same thing. I'm Phil, Phillip, Phil Windley, Dad, and so on, depending on the circumstance.

In computing, we use names for similar reasons. We want to easily refer to things like memory locations (variables), inodes (file names), IP addresses (domain names), and so on. Names usually possess several important properties, including:

  • Names should be unique within some specific namespace
  • Names should be memorable
  • Names should be short enough for humans to type into computing devices

As Crosbie Fitch points out in his excellent treatise on identity, names don't need to be globally unique, just unique enough. Names are identifiers we put on things that already have an identity. Names aren't the same thing as identity.

Do We Need Names?

Do we need names? At first blush everyone says "yes," but when you dig deeper there are lots of systems where we don't really need names, at least not in the form of a direct mapping between names and addresses.

The best example is the Web itself. URLs aren't names. They're addresses. While they are globally unique, they aren't memorable, and most people hate typing them into things. If I'm looking for IBM, I'm happy to type ibm.com into my browser. But if I'm looking for a technical report by IBM from 2006? Even if I know the URL, I'm not likely to type it in; instead, I'll just search for it using a few keywords. Most of the time that works so well that we're surprised when it doesn't.

There are several alternatives to globally unique names.

Discovery

When we type keywords into a search engine we're using an alternative to names: discovery.

The World Wide Web solved several important problems, but discovery wasn't one of them. As a result, Aliweb, Yahoo!, and a host of other companies and projects sprang up to solve the discovery problem. Ultimately Google won the search engine wars of the late 90s. People have argued that search and discovery are natural monopolies. Maybe. But there are heterarchical methods of finding things.

When I mention this to people, I often get asked "what do you have against Google?" Nothing specifically against Google. But I think the model of centralized discovery, mail, communication, and friendship has significant drawbacks. The most obvious one is the problem of having a single point of failure. All of these products and companies will eventually go away, whether you're done using them or not.

A larger problem is censorship. Notice that while many despotic regimes will try to shut down Twitter or some other centralized service from time to time, they have a much tougher time restricting access to and use of the larger Web and more so the Internet (yeah, there's a difference despite the media's confusion).

Larger still is the privacy question. Twitter, Facebook, Google, and their ilk are the stuff of dreams for tyrants, bullies, corporate spies, and others who wish you harm. But it's more insidious than that. The issue of online privacy isn't limited to conspiracy theories about some hypothetical threat. The real threat to our privacy isn't the NSA; it's the retailers and others who want to sell you stuff. They employ centralized systems like Facebook and Google every hour of every day to use your personal information against you. They'd claim they're using your data to help you. Ask yourself what percentage of all the ads you see in a week you consider helpful.

Personal Directories and Introductions

Discovery isn't the only way to get around a lack of names. To see how, think about your house address. It's a long, unwieldy string of digits and letters. Resolving a person's name to their address has no global solution. That is, there's no global directory (except maybe at Acxiom or the NSA) that maps names to addresses. Even the Post Office, in over 200 years of existence, hasn't thought "Hey! We need to create a global directory of names and addresses!" Or if they have, it didn't succeed.

So how do we get around this? We exchange addresses with people and keep our own directories. We avoid security issues by exchanging or verifying addresses out of band. For the most part, this is good enough.

Personal directories are largely how people exchange bitcoins and other cryptocurrencies. I give you my bitcoin address in a separate channel (e.g. email, my web site, etc.). You store it in a personal directory on your own system. When you want to send me money, you put my bitcoin address in your wallet. To make it even more interesting, since bitcoin addresses are derived from public-private key pairs, I can generate a new one for every person, creating what amount to personal, peer-to-peer channels for exchanging money.
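To make the pattern concrete, here's a minimal sketch of a personal directory in Python. Everything in it is hypothetical—the file location, the names, the address—and real address derivation involves more steps; the point is just that names only need to be unique to me, and addresses arrive out of band.

```python
# A minimal sketch of a personal directory: names are only locally
# unique, addresses are exchanged out of band. All values are made up.
import json
from pathlib import Path

class PersonalDirectory:
    """Maps locally chosen names to long, unwieldy addresses."""
    def __init__(self, path="~/.directory.json"):
        self.path = Path(path).expanduser()
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else {}

    def add(self, name, address):
        # 'name' only needs to be unique *to me* -- no global registry.
        self.entries[name] = address
        self.path.write_text(json.dumps(self.entries, indent=2))

    def lookup(self, name):
        return self.entries.get(name)

# Usage: after Phil sends me his address in a separate channel (email,
# his web site), I store it under whatever name I like.
d = PersonalDirectory()
d.add("phil", "1PhilExampleAddressXXXXXXXXXXXXXX")  # hypothetical address
print(d.lookup("phil"))
```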

When we built Forever, we relied on people using email to introduce their personal clouds to one another. This introduction ceremony provided a convenient way to exchange the long addresses of the personal clouds and store them away for future use.

So long as there is some trusted way to communicate with the party you're connecting to, long addresses aren't as big a problem as you might think. We only need to resort to names and discovery when we don't have a trusted channel.

Heterarchical Naming Systems

The problem with personal directories is that they make global lookup difficult. Unless I have some pre-existing relationship with you or a friend who'll do an introduction, a personal directory does me little good. One way to solve this problem is with systems that work like DNS, but are heterarchical.

I've recently been playing with a few interesting naming systems based on bitcoin. Whatever you may think of bitcoin as a currency, there is little doubt that bitcoin presents a working example of a global distributed consensus system.

Distributed consensus is the foundational feature of a heterarchical naming system. To understand why, think about DNS. DNS distributes the responsibility of assigning names, but it avoids the problem of consensus (agreeing on which names stand for which IP addresses) by keeping a single authoritative copy of the mapping. This single copy presents a single point of failure and a convenient means of censoring or even changing portions of the map.

If we want to distribute the copy of the mapping and make everyone responsible for maintaining their own mapping between names and addresses, we need a distributed consensus system. Bitcoin provides exactly such a system in the form of a block chain, a cryptographic data structure with a functional means of validating updates.

Onename.io and Namecoin are examples of systems that use the block chain to map names to addresses in a heterarchical fashion. I have registered windley.bit using Namecoin. If you type it into your browser it won't resolve, since your operating system only knows how to resolve names via DNS. But that's not a fundamental limitation; you can patch your OS to resolve names using alternative mappings like Namecoin's. Your OS didn't understand TCP/IP in the distant past either. I used to regularly patch Windows 3.1 by adding a TCP/IP stack; Windows 95 included one due to popular demand. Right now, I'm using a browser plugin from FreeSpeechMe to resolve .bit domains for me.
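As a rough sketch of what such a resolver does (not FreeSpeechMe's actual implementation), the logic looks something like this. The namecoin_lookup function is a stand-in for querying a local Namecoin node—in practice an RPC such as name_show against the node's copy of the block chain—and the record format and IP here are only illustrative:

```python
# A hedged sketch of resolving a .bit name: consult a local copy of the
# Namecoin key-value store first; fall back to ordinary DNS otherwise.
import json, socket

def namecoin_lookup(key):
    # Hypothetical: in practice this would be an RPC call to a local
    # Namecoin node holding the full block chain.
    local_store = {"d/windley": json.dumps({"ip": "192.0.2.7"})}  # example data
    value = local_store.get(key)
    return json.loads(value)["ip"] if value else None

def resolve(hostname):
    if hostname.endswith(".bit"):
        label = hostname[:-len(".bit")]
        return namecoin_lookup("d/" + label)  # .bit names live under "d/"
    return socket.gethostbyname(hostname)     # ordinary DNS

print(resolve("windley.bit"))
```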

What's the advantage of windley.bit over windley.com? Simply that the mapping is completely distributed. There is no single point of failure: you can turn off all the TLD servers and windley.bit will still work. One of the key provisions of the Stop Online Piracy Act (fortunately dead for now) would have used DNS to censor Web sites deemed to be infringing. Heterarchical directories would be immune to such silliness.

Aside: Namecoin is actually a general purpose distributed key-value store. So, domain names are just one thing you can do with it.

Conclusion

I'm very excited about heterarchical technologies coming into play. I believe the near future will incorporate computers into more facets of our lives than we can even imagine. If we're to trust those computers and avoid giving up autonomy to centralized authorities, heterarchical structures will be fundamental. I don't think it's going too far to say that our natural rights as human beings are based on a world that is heterarchical (at the global level) and that we are fooling ourselves if we believe we can engineer virtual systems that respect or protect those rights using hierarchies and centralized authorities.

Bonus link: Adriana Lukas has an excellent talk at TEDxKoeln on heterarchies and key principles.


Automatically Run the KRL Parser When You Commit Code

Git-Logo-2Color.png

I'm tired of running the KRL parser from the command line every time I check in code. And based on the number of my students who've had problems with unparsable code that has been checked in, I'm not the only one.

I've been meaning to create a pre-commit hook for Git that runs the parser whenever you commit code. I just checked in code to the KRL Parser repo that provides a pre-commit hook that should work for users of Linux and OS X. If someone wants to create a Windows version, I'd be happy to include it in the repo. Just send me a pull request.

To use the hook, you need to copy pre-commit to the .git/hooks directory in your KRL repository and change the value of $PARSER to where you've installed the krl-parser.pl program. Note that you need to do this for any repository where you want the parser to automatically check KRL code. The hook checks every file with a .krl extension.
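The hook in the repo is the authoritative version, but as a sketch of what it does, a hook along these lines would check every staged .krl file and abort the commit on failure. This assumes krl-parser.pl exits nonzero on a parse error; the real script's invocation details may differ.

```python
#!/usr/bin/env python
# Minimal sketch of a KRL pre-commit hook (the real one ships with the
# krl-parser repo). Aborts the commit if any staged .krl file fails to parse.
import subprocess, sys

PARSER = "/usr/local/bin/krl-parser.pl"  # change to your install location

# Files staged for this commit (added, copied, or modified).
staged = subprocess.check_output(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"]
).decode().splitlines()

failed = False
for f in (f for f in staged if f.endswith(".krl")):
    if subprocess.call(["perl", PARSER, f]) != 0:
        print(f"parse error in {f}", file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)  # nonzero exit aborts the commit
```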

Overall, this works pretty well and will, over time, save me time and effort.


The Coming Century of War Against Your Computer

Read this short post on Intel Processors to Become OS Locked, then listen to Cory Doctorow's The Coming Century of War Against Your Computer. Cory's actual talk is about 45 minutes followed by another 45 minutes of question and answer.

As Cory points out, computers exercise a particular intersection of property and human rights that makes for some interesting societal questions. Sometimes the questions are ones for which we've already got a meatspace precedent that we choose to ignore when computers are involved. For example, can your employer spy on what you put in email? Many find nothing wrong with that. After all, they own the computer and the network. But we'd come to a different conclusion about a plan to spy on all the conversations in the lunchroom, even though they own the building.

In other cases we are faced with entirely new situations. Can I decide what software runs on my cochlear implants, or does the manufacturer control that? What if a competitor comes out with a much better algorithm that is compatible with my implants? Who controls the code? The owner? The user (who may not be the same person)? The manufacturer? Who has what property rights, and when do human rights trump them?

I'm a big fan of Cory's. I like his books and I think he's eminently well-spoken. His novels raise these issues and tell great stories at the same time. In a similar vein, I just had my class of Computer Science seniors read Rainbows End (by Vernor Vinge), a book that's full of technology that raises these kinds of questions inside the vehicle of a fun story.


Substitutability is an Indispensable Property for the Internet of Things

In a recent update on the Fuse Kickstarter project, I posted a picture showing a possible UI flow for linking the Fuse app to the backend service that provides Fuse with all its smarts. If you've been following along, you'll know that I propose that the service act as a personal cloud and that the app be built using the Personal Cloud Application Architecture.

Here's the initial flow I proposed:

fuse-signup

The interaction proceeds left to right. Screen (2) follows the user clicking "Let's Go!" on screen (1). Screen (3) follows the sign in on (2). Screen (4) (or something like it) follows the user clicking on "Allow" on (3) if there are vehicles in the personal cloud. If not, the flow would lead to an "Add car" flow. I'm not showing the "More info" page or the account creation page. The "More info" page would talk more about the Fuse personal cloud and our policies around personal data.

Screen (3) is a "consent screen" asking the user for consent to allow the app to link to their personal cloud. Adrian Gropper wrote back to me:

Hi Phil,

In general, I have a problem with the feel of consent screens like #3 because they don't offer a meaningful choice. They feel like a standard click-through agreement and, as such, do injustice to the personal cloud concept we're promoting.

I don't have a clear solution for this problem in the Fuse application. In general, I'd hope to see at least 3 choices:

  • full functionality if I trust the service provider
  • partial functionality if I share nothing with the service provider yet and may choose one later
  • no functionality if I want to pause and investigate service provider options

In other words, meaningful choice means that the service providers are substitutable and compete for my trust and that this is made clear to me at the point of giving consent.

Adrian

Adrian's last graf has a vital point that anyone interested in the larger Internet of Things must pay attention to: "meaningful choice means that the service providers are substitutable and compete for my trust".

Substitutability is a key feature of decentralized systems. I can change my ISP without loss of functionality. I can change Web hosting providers. I can change email providers. I can even provide all of those things myself if I like and have it all work the same. Many people have used the same email address for 20 years now, all the while using a variety of email providers.

Unfortunately, the world that Web 2.0 gave us is decidedly unsubstitutable. I can use Twitter, Facebook, or both, but they're not substitutable in the way email providers are. I can't substitute my iPhone for a Nexus without finding new apps and learning new things. I can't substitute one API for another because there are no standards. If the future is social, mobile, and cloud, we've given up a lot to get there. I talk about this at length in Why Personal Clouds Need an OS.

I fully intend for people to be able to link their Fuse app to an alternate system (one that speaks the Fuse API). Until such alternate systems exist, however, Adrian's comment is spot on. Right now, the consent screen will largely be an "agree or else" proposition, and that doesn't feel right. But without alternate backend systems there's no graceful fallback to partial functionality. Kind of a Catch-22. If you've got suggestions, I'm open to them.

We need personal clouds. They need to be interoperable. They need to be substitutable. Our personal data must be portable. As Adrian says above, that's how we build a system we can trust. Choice in email providers allows me to find one I trust. Choice in ISPs (for those of us not in a Comcast wasteland) lets me find one I trust.

What happens if you stop trusting Facebook? Fortunately, for most of us, Facebook isn't something we have to have to do our jobs, so we could just stop using it. But what about when the Internet of Things is fully in play, you've got a thousand connected products in your life and you stop trusting the company that provides their cloud backend? Unless that infrastructure is substitutable, you're stuck. Privacy, functionality, and affordability are all at risk without substitutability.


Coinbase Shenanigans

Coinbase logo

A while back I started playing with bitcoin. I didn't want to mine it; I just wanted some. So I got a wallet and signed up for Coinbase on a friend's recommendation. I linked my bank account and bought $45 worth of bitcoin. It takes 4 days for the transaction to clear, so I sat back and waited. The transaction cleared, and I got my bitcoin. Yeah!

So, I decided I wanted a little more. I bought 1 bitcoin at $560 on Feb 28 and sat back to wait for the transaction to clear. Today I got this email:

Hi Phil Windley,

On Feb 28, 2014 you purchased 1.00 BTC via bank transfer for $560.69.

Unfortunately, we have decided to cancel this order because it appears to be high risk. We do not send out any bitcoins on high risk transactions, and you will receive a refund to your bank account in 3-4 business days.

Please understand that we do this to keep the community safe and avoid fraudulent transactions. Apologies if you are one of the good users who gets caught up in this preventative measure - we don't get it right 100% of the time, but we need to be cautious when it comes to preventing fraud.

You may have more luck trying again in a few weeks. Best of luck and thank you for trying Coinbase.

Kind regards,
The Coinbase Team

Huh? I'm not sure why they think it's high-risk, but I do know a few things:

  • The money left my account two days ago and it's going to take them 3-4 business days to put it back. They got free use of my money for 5-6 days. You could make a pretty good business off the float if you had enough transactions.
  • How much risk can there be when they've had my money for 2 days? It's not like I'm able to do some kind of clawback or something.
  • I bought bitcoin at $560 and today it's selling for $667, so maybe the high risk was that they don't want to fulfill transactions that happen when the price goes up. I wonder if this transaction would be "high-risk" if bitcoin had been selling for $460 today?

All in all, a little disappointing. Maybe I'll have to find a different broker. Suggestions?


Vehicles That Get Better Over Time

The 10k

Smartphones have made us used to the idea that things can get better over time. If you're using a less-than-new iPhone or Android handset, chances are it's better now than it was when you bought it because you've upgraded the operating system. If you throw apps in, it gets better every day because some app or another gets a new version. When I say "gets better", I mean bugs get fixed, performance (often) improves, new features get added, and so on. Sure, some "updates" aren't the improvements we were hoping for, but the overall trend is toward "better."

Contrast that with your car. From the moment you buy it, it gets worse. You never take it to the dealer and have them say "oh there's a better engine out now, so we upgraded your car while it was here." Hardware doesn't work like that. Hardware upgrades cost money. Software upgrades tend toward free.

But a greater percentage of every manufactured good, cars included, is software. That trend will accelerate over time. What's interesting is that car manufacturers largely treat the software in the car the same way they treat hardware. They'll update it if there's a safety issue, but otherwise, it never changes.

I have a Ford F-150 with Microsoft Sync, so I was interested in this article saying that Ford is ditching Sync in favor of the QNX platform. One of the things that I loved about Sync was that I was able to upgrade the firmware in my truck a few times. It was clunky, but it could be done. My truck got a little better.

The problem is that it was only a few times, early on, and there were never any cool new features, just some bug fixes. Car manufacturers just don't think of their product that way. I'm not surprised that people have been largely unsatisfied with Sync, but I suspect the problem is more the product culture at Ford than inherent problems with Sync. If Ford and other manufacturers don't start thinking of their products the way smartphone makers think of theirs, people will remain unsatisfied.

Adapting to this new requirement will require focusing on the software and making it as much, or more, of the overall product as the hardware. That means more than just giving token support to older models. Sure, they're going to become obsolete and get left behind, but that can't happen after one year. Car makers will have to own the software-mediated experience and work to make it better, bringing owners of older models along as much as they can.

Smartphones have gotten us used to things that are mostly software and, consequently, get better over time. Every other manufacturer of durable goods will have to follow suit. Their overall success will likely be a product of how well they adapt to this new fact of life.


Fargo and Personal Cloud Application Architectures

Gleaming Fargo

I started blogging in 2002 using Radio Userland. Radio was designed and built by Dave Winer. Dave's roots are in outliners and so is his most recent project, Fargo. Fargo is clean, simple, and easy to use. In Dave's words:

Fargo is a simple idea outliner, notepad, todo list, blogging tool, project organizer.

It's an HTML 5 application, written in JavaScript, runs in any compatible browser, including Chrome, Safari, Firefox, Microsoft IE 10.

Files are stored in Dropbox, using the Dropbox API. They are accessible anywhere Dropbox is. You can share files with other users, or publicly.

From What is Fargo?
Referenced Mon Feb 24 2014 09:11:22 GMT-0700 (MST)

Yeah, Fargo is a Web application with no backend. Or, more precisely, Fargo is a Web application that uses Dropbox as its backend. Take a minute and go link Fargo to your Dropbox account and create an outline. Then go look in Dropbox and find your Fargo files (they're in Dropbox/Apps/Fargo). If you're syncing your Dropbox account with a folder on your computer, they're right there, on your hard drive—completely under your control. Wrapping your head around this idea is important.

Want to export the data from Fargo? No need, it's already on your hard drive.

Want to wipe it out for some reason? Go ahead, just delete it from Dropbox. It's all yours.

Want to remove Fargo's access, but keep the data? No problem. Just unlink Fargo from your Dropbox account. The data's still there on your hard drive.

I've been calling this programming model the personal cloud application architecture (PCAA). Others call these applications "unhosted". PCAA apps use a "back-end as a service" (BaaS) system of one type or another to provide storage and other services for the application. Because of its API, Dropbox is a simple BaaS system that provides storage under user control. Other examples include remoteStorage and usergrid. For the last several years, I've been working on CloudOS, with similar goals.
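As a minimal sketch of the pattern—not Fargo's actual code, which used the Dropbox API of its day—here's what a PCAA app's storage layer can look like. The token, path, function names, and app name are placeholders, and the calls use today's Dropbox v2 HTTP API; the essential point is that the app holds no user data of its own.

```python
# A sketch of the PCAA pattern: the app reads and writes through a
# storage service the *user* controls. The OAuth token belongs to the
# user's own Dropbox account; unlink the app and the data stays put.
import json, requests

TOKEN = "USER_SUPPLIED_OAUTH_TOKEN"   # placeholder
PATH = "/Apps/MyPcaaApp/notes.json"   # app data lives in the user's Dropbox

def load_notes():
    r = requests.post(
        "https://content.dropboxapi.com/2/files/download",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Dropbox-API-Arg": json.dumps({"path": PATH})})
    return json.loads(r.content) if r.ok else []

def save_notes(notes):
    requests.post(
        "https://content.dropboxapi.com/2/files/upload",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Dropbox-API-Arg": json.dumps({"path": PATH, "mode": "overwrite"}),
                 "Content-Type": "application/octet-stream"},
        data=json.dumps(notes).encode())
```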

A traditional web application looks something like this:

standard_web_architecture

The browser talks to the Web application on a server that fronts a data store where user data is stored. Not surprisingly, many Web 2.0 business models follow this same architecture, deriving their revenue from the fact that they have "captured" their customers.

In contrast, a PCAA application looks like this:

unhosted_web_architecture

In a PCAA app, the application and storage (at least for user data) are separate. Ideally, the user can pick where the application data is stored, even self-host if they like. In a PCAA app, users are not captured. Rather, they are free-range customers.

There are several advantages to PCAA applications:

  • User control—as we discussed above, this architecture gives users a lot more control over their data. This is why I call it a "personal cloud" application architecture, because the backend infrastructure is under the user's control—it's their personal cloud.
  • Less developer overhead—Developers can forget about the hassles involved in operating complex servers to store and manage user data. They get to concentrate on the application.
  • Less expense—Running backend infrastructure is expensive. PCAA apps use someone else's infrastructure.

The hassle and expense of designing, building, and running backend infrastructure keeps a lot of interesting applications from being built.

One objection I sometimes hear is that with an HTML5 application like Fargo, there's nothing to keep someone else from stealing the application. That's true, if that bothers you. On the other hand, one of the reasons for the viral adoption of the Web was that developers could view the source of any Web site and understand how it works. But if you're not in a sharing mood, there's nothing about PCAA apps that requires HTML5 and JavaScript. You could just as easily build one in Ruby or Python and run it on a server. What makes it a PCAA app isn't the language it's written in, but how it treats user data.

We've written a number of applications using the PCAA style with CloudOS providing the backend infrastructure including Forever, an evergreen contact application, a simple Todo application that I use for demonstrations, and a Guard Tour application.

One of the interesting things about PCAA apps is that they don't have accounts as we've come to understand them. I don't have an account at Fargo. Fargo has users, but no accounts. You could, of course, add accounts if needed, but most of the time they're simply not necessary.

In a world of APIs and services, creating common backend services makes sense. One of the key drivers of innovation in computers has always been modularity. Making storage and other common services (like subscriptions, notifications, configuration management, calendaring, application support, etc.) modular will further accelerate the growth of applications by making them easier and cheaper to build.


Pico Event Evaluation Cycle

Let’s Monkey Around

In my post on the event-query API that picos present to the world, I talked about picos listening for events and responding, but there wasn't a lot of detail in how that worked. This post will describe the event evaluation cycle in some detail. Understanding these details can help developers have a better feel for how the rulesets in a pico will be evaluated for a given event.

Each pico presents an event loop that handles events sent to the pico according to the rulesets installed in it. The following diagram shows the five phases of event evaluation. Note that evaluation is a cycle, like in any interpreter: the event is the input that causes the cycle to happen. Once that event has been evaluated, the pico waits for another event.

event eval cycle

We'll discuss the five stages in order.

Wait

The wait phase is where picos spend most of their time. For efficiency's sake, the pico is suspended during the wait phase. When an event is received, KRE (the Kinetic Rules Engine) wakes the pico up and begins executing the cycle. Unsuspending a pico is a very lightweight operation.

Decode Event

The decode phase performs the simple task of unpacking the event from whatever method was used to transport it and putting it in a standard RequestInfo object. The RequestInfo object is used for the remainder of the event evaluation cycle whenever information about the event is needed.

While most events, at present, are transported over HTTP via the Sky Event API, that needn't be the case. Events can be transported via any means for which a decoder exists. In addition to Sky Event, there is also support for an SMTP transport called Sky Mail. Other transports (e.g. XMPP, RabbitMQ, etc.) could be supported with minimal effort.
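As an illustration, raising an event over the Sky Event API amounts to a single HTTP request. The host, event channel identifier, and the fuse/fuel_purchased event below are hypothetical; the URL shape follows the /sky/event/&lt;eci&gt;/&lt;eid&gt;/&lt;domain&gt;/&lt;type&gt; scheme.

```python
# A hedged sketch of sending an event to a pico over the Sky Event API.
import requests

host = "https://cs.kobj.net"       # hypothetical engine host
eci = "SOME-EVENT-CHANNEL-ID"      # identifies the target pico
eid = "42"                         # correlation id chosen by the sender

resp = requests.get(
    f"{host}/sky/event/{eci}/{eid}/fuse/fuel_purchased",
    params={"gallons": 12.3, "price": 42.17})  # event attributes
print(resp.status_code, resp.text)  # the assembled response, if any
```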

Schedule Rules

The rule scheduling phase is very important to the overall operation of the pico since building the schedule determines what will happen for the remainder of the event evaluation cycle.

Rules are scheduled using a salience graph that shows, for any given event domain and event type, which rules are salient. The event in the RequestInfo object will have a single event domain and event type. The domain and type are used to look up the list of rules that are listening for that event domain and type combination. Those rules are added to the schedule.

The salience graph is calculated from the rulesets installed in the pico. Whenever the collection of rulesets for a pico changes, the salience graph is recalculated. There is a single salience graph for each pico. The salience graph determines for which events a rule is listening by using the rule's event expression.

Rule order matters within a ruleset. KRE ensures that rules appear in the schedule in the order they appear in the ruleset. No such ordering exists for rulesets, however, so there is no guarantee that rules from one ruleset will be evaluated before or after those of another unless the programmer takes explicit steps to ensure that they are (see the discussion of explicit events below).

The salience graph creates an event bus for the pico, ensuring that as rulesets are installed their rules are automatically subscribed to the events for which they listen.
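A rough sketch of the idea (names are illustrative, not KRE internals): the salience graph is effectively a map from (domain, type) pairs to rules, kept in ruleset order, and scheduling is a lookup in that map.

```python
# Sketch of schedule building from a salience graph: rules are keyed by
# (domain, type); within a ruleset, schedule order follows source order,
# but no ordering is guaranteed *between* rulesets.
from collections import defaultdict

# salience graph: (domain, type) -> rules, in ruleset order
salience = defaultdict(list)
salience[("fuse", "fuel_purchased")] += ["rulesetA:log_fuel", "rulesetA:update_mileage"]
salience[("fuse", "fuel_purchased")] += ["rulesetB:notify_owner"]

def build_schedule(event):
    # Every rule listening for this (domain, type) gets scheduled;
    # whether it actually *fires* is decided later by its event expression.
    return list(salience[(event["domain"], event["type"])])

print(build_schedule({"domain": "fuse", "type": "fuel_purchased"}))
```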

Rule Evaluation

The rule evaluation phase is where the good stuff happens, at least from the developer's standpoint. The engine runs down the schedule, picking off rules one by one, evaluating the event expression to see if the rule fires and then, if it does, executing the rule. Note that a rule can be on the schedule because it's listening for an event, but still not be selected because its event expression hasn't reached a final state. There might be other events that have to be raised before it is complete.

For purposes of understanding the event evaluation cycle, most of what happens in rule execution is irrelevant. The exception is the raise statement in the rule's postlude. The raise statement allows developers to raise an event as one of the results of rule evaluation. Raising explicit events is a powerful tool.

From the standpoint of the event evaluation cycle, however, explicit events are a complicating factor because they modify the schedule. Explicit events are not like function calls or actions because they do not represent a change in the flow of control. Instead, an explicit event causes the engine to modify the schedule, possibly appending new rules. Once that has happened, rule execution picks up where it left off in the schedule. The schedule is always evaluated in order and new rules are always simply appended. This means that all the rules that were scheduled because of the original event will be evaluated before any rules scheduled because of explicit events. Programmers can also use event expressions to order rule evaluation.
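To make the append semantics concrete, here's a sketch (again with illustrative names, not KRE internals): the engine walks the schedule in order, and a raise appends any newly salient rules to the end.

```python
# Explicit events extend the schedule: appended, never inserted, so
# everything scheduled for the original event runs first.
salience = {
    ("fuse", "fuel_purchased"): ["log_fuel", "update_mileage"],
    ("fuse", "mileage_updated"): ["notify_owner"],
}
raises = {"update_mileage": [("fuse", "mileage_updated")]}  # postlude raises

def evaluate(event):
    schedule = list(salience.get(event, []))
    i = 0
    while i < len(schedule):
        rule = schedule[i]
        print("evaluating", rule)
        for raised in raises.get(rule, []):            # explicit events
            schedule.extend(salience.get(raised, []))  # append to the end
        i += 1

evaluate(("fuse", "fuel_purchased"))
# evaluates log_fuel, update_mileage, then notify_owner -- in that order
```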

If a rule makes a synchronous call to an external API, rule execution waits for the external resource to respond. If a rule sends an event to another pico, that sets off another, independent event evaluation cycle; it doesn't modify the schedule of the cycle executing the event:send(). Inter-pico events are sent asynchronously by default.

Assembling the Response

The final response is assembled from the output of all the rules that fired. The idea of an event having a response is unusual. For KRE it's a historic happenstance that has proven useful. Events raised asynchronously never have responses. For events raised synchronously, the response is most useful as a way to ensure the event was received and processed. But the response can have real utility as well.

Historically, KRE returned JavaScript as the result of executing rules. That has been expanded so that the result can be JSON or other correctly mime-typed content. This presents challenges for the engine since rules could be written by many different developers and yet there can be only one result type.

Presently the engine handles this by assuming that any response to an event with the domain web will be JavaScript and that anything else will be a directive document (JSON with a specific schema). This suffices for many purposes, but doesn't admit raw responses such as images, or even just a JSON doc that isn't a directive. The engine puts a correctly formatted response together as best it can, but more work is needed, especially in handling raw responses.
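For illustration, a directive document looks roughly like this. The field names follow the send_directive() style and the values are made up; treat the exact schema as an assumption.

```python
# A directive document, roughly: each rule that fires can contribute a
# directive, and the engine assembles them into one JSON response.
directive_response = {
    "directives": [
        {"name": "say",
         "options": {"msg": "fuel purchase recorded"}},  # illustrative
    ]
}
```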

This isn't usually a problem because the semantics of a particular event usually imply a specific kind of response (much as we've determined up front that JavaScript is the correct response for events with a web domain). Over time, I expect more and more events will be raised asynchronously and the response document will become less important.

Waiting...Again

Once the response has been returned, the pico waits for another event.

Conclusion

For simple rulesets, a programmer can largely ignore what's happening under the covers and the preceding discussion is overkill. But applications that consist of multiple rulesets, complex pico structures, or many explicit events can have asynchronous interactions that the developer must understand to avoid pitfalls like race conditions.

If you'd like to add other transports to KRE, we welcome the help. KRE is an open source project hosted on Github. We're happy to help you get started.

The event evaluation cycle described above presents programmers with a flexible, programmable, cloud-based way to respond to events. Explicit events add significant power by allowing the rule schedule to be expanded on the fly. The salience graph provides a level of indirection, binding events to rules in a way that is loosely coupled and dynamic. The event loop is the source of much of KRL's power.


Related: I originally introduced the idea of KRE creating an event loop in A Big, Programmable Event Loop in the Cloud. At that time, 2010, the concept of picos and salience graphs was not fully developed. Those were made possible by the introduction of the Sky Event API in 2011.


10-4 Good Buddy! Vehicle to Vehicle Communication

headlight

On Feb 3, a Federal District judge issued an injunction (PDF) that said the First Amendment gives people the right to communicate on the road. The ruling was in response to a lawsuit against the city of Ellisville, MO over an ordinance that forbade motorists from flashing their lights at each other. The city was in the habit of handing out $1000 fines for light flashing. (Reason.com article)

The same day I saw that, I also read Government Wants Cars To Talk To Each Other in Time regarding the Dept. of Transportation's V2V or vehicle-to-vehicle initiatives:

The government agency estimates that vehicle-to-vehicle (v2v) communication could prevent up to 80 percent of accidents that don’t involve drunk drivers or mechanical failure.

The DoT proposal would require all car manufacturers to install v2v communications in cars and other light vehicles. The systems typically feature transponders able to communicate a car’s location, direction and speed at up to 10 times per second to other cars surrounding it, using a dedicated radio spectrum similar to WiFi. The vehicle would then alert its driver to a potential collision. Some systems could automatically slow the car down to avoid an accident.

Will people have access to these V2V systems? Will I be able to use it to send my own messages to the cars around me? Will I be able to control what messages get sent? Not likely.

Instead, the government could use such systems to spy on you or even restrict how and where you drive. There's nothing that would keep such a system from forcing a vehicle to drive the speed limit. Or keep it from entering certain areas. Your car simply wouldn't go if you tried to take it somewhere the government hadn't authorized. Of course, the DoT and others will tout the safety benefits, but there are also significant opportunities for the Nanny State to repress or, worse, the Surveillance State to spy. The car, long a symbol of freedom, would become a mere means of getting from point A to point B, so long as point B is on the list of authorized destinations for you.

While such DoT-mandated V2V systems are likely inevitable, we can work to ensure that they protect driver privacy and that their use doesn't curtail freedom of movement. There's also nothing to prevent us from creating our own V2V systems that send the messages we want to send. Our goal with Fuse is to give your vehicle a voice that speaks to and for you. You control what your vehicle says and who it says it to. No reason it couldn't be speaking to other cars. Think of it like CB radio for the 21st century.


Using jQuery Mobile and Backbone

Bicycle & Jogger with child ride along the bike path

I've been interested in using Backbone as part of my jQuery apps for a while. I finally got around to playing with it. I found a good set of tutorials by Steve Smith that builds a simple activity tracker.

Good, that is, except that something is wrong with the formatting so you can't actually see the code snippets, and some parts just seem to be missing altogether. A little hard to read. The saving grace is that Steve created a Github repo of the whole thing. What's especially cool is that each of the transformations outlined in his four blog posts is a different branch. Consequently, you can check the whole thing out and experiment with each by merely checking out a different branch and reloading.

I wanted to use this model but jQuery, jQuery Mobile, and Backbone.js have all been updated since the example was written. Before using the tutorial as the basis for something I needed to make sure it all still worked with the latest versions. So, I forked the repo and added a new branch, update-to-jqm14-bb11, and made it compliant with the new versions. If you clone my repo and check out that branch, you'll have a simple activities demo that uses the latest code.