Coinbase Shenanigans


A while back I started playing with bitcoin. I didn't want to mine it, I just wanted some, so I got a wallet and signed up for Coinbase on a friend's recommendation. I linked it to my bank account and bought $45 worth of bitcoin. The transaction takes 4 days, so I sat back and waited. The transaction cleared, and I got my bitcoin. Yeah!

So, I decided I wanted a little more. I bought 1 bitcoin at $560 on Feb 28 and sat back to wait for the transaction to clear. Today I got this email:

Hi Phil Windley,

On Feb 28, 2014 you purchased 1.00 BTC via bank transfer for $560.69.

Unfortunately, we have decided to cancel this order because it appears to be high risk. We do not send out any bitcoins on high risk transactions, and you will receive a refund to your bank account in 3-4 business days.

Please understand that we do this to keep the community safe and avoid fraudulent transactions. Apologies if you are one of the good users who gets caught up in this preventative measure - we don't get it right 100% of the time, but we need to be cautious when it comes to preventing fraud.

You may have more luck trying again in a few weeks. Best of luck and thank you for trying Coinbase.

Kind regards,
The Coinbase Team

Huh? I'm not sure why they think it's high-risk, but I do know a few things:

  • The money left my account two days ago and it's going to take them 3-4 days to put it back. They get free use of my money for 5-6 days. You could make a pretty good business off the float if you have enough transactions.
  • How much risk can there be when they've had my money for 2 days? It's not like I'm able to do some kind of clawback or something.
  • I bought bitcoin at $560 and today it's selling for $667, so maybe the "high risk" is that they don't want to fulfill transactions that happen when the price goes up. I wonder if this transaction would be "high-risk" if bitcoin had been selling for $460 today?

All in all, a little disappointing. Maybe I'll have to find a different broker. Suggestions?


Vehicles That Get Better Over Time


Smartphones have made us used to the idea that things can get better over time. If you're using a less-than-new iPhone or Android handset, chances are it's better now than it was when you bought it because you've upgraded the operating system. If you throw apps in, it gets better every day because some app or another gets a new version. When I say "gets better", I mean bugs get fixed, performance (often) improves, new features get added, and so on. Sure, some "updates" aren't the improvements we were hoping for, but the overall trend is towards "better."

Contrast that with your car. From the moment you buy it, it gets worse. You never take it to the dealer and have them say "oh there's a better engine out now, so we upgraded your car while it was here." Hardware doesn't work like that. Hardware upgrades cost money. Software upgrades tend toward free.

But an ever greater percentage of every manufactured good, cars included, is software, and that trend will accelerate over time. What's interesting is that car manufacturers largely treat the software in the car the same way they treat hardware. They'll update it if there's a safety issue, but otherwise, it never changes.

I have a Ford F-150 with Microsoft Sync, so I was interested in this article saying that Ford is ditching Sync in favor of the QNX platform. One of the things that I loved about Sync was that I was able to upgrade the firmware in my truck a few times. It was clunky, but it could be done. My truck got a little better.

The problem is that it was only a few times, early on, and there were never any cool new features, just some bug fixes. Car manufacturers just don't think of their product that way. I'm not surprised that people have been largely unsatisfied with Sync, but I suspect the problem is more the product culture at Ford than inherent problems with Sync. If Ford and other manufacturers don't start thinking of their products the way smartphone makers think of theirs, people will remain unsatisfied.

Adapting to this new requirement will require focusing on the software and making it as much, or more, of the overall product as the hardware. That means more than just giving token support to older models. Sure, they're going to become obsolete and get left behind, but that can't happen after one year. Car makers will have to own the software-mediated experience and work to make it better, bringing owners of older models along as much as they can.

Smartphones have gotten us used to things that are mostly software and, consequently, get better over time. Every other manufacturer of durable goods will have to follow suit. Their overall success will likely be a product of how well they adapt to this new fact of life.


Fargo and Personal Cloud Application Architectures


I started blogging in 2002 using Radio Userland. Radio was designed and built by Dave Winer. Dave's roots are in outliners and so is his most recent project, Fargo. Fargo is clean, simple, and easy to use. In Dave's words:

Fargo is a simple idea outliner, notepad, todo list, blogging tool, project organizer.

It's an HTML 5 application, written in JavaScript, runs in any compatible browser, including Chrome, Safari, Firefox, Microsoft IE 10.

Files are stored in Dropbox, using the Dropbox API. They are accessible anywhere Dropbox is. You can share files with other users, or publicly.

From What is Fargo?
Referenced Mon Feb 24 2014 09:11:22 GMT-0700 (MST)

Yeah, Fargo is a Web application with no backend. Or, more precisely, Fargo is a Web application that uses Dropbox as its backend. Take a minute and go link Fargo to your Dropbox account and create an outline. Then go look at Dropbox and find your Fargo files (they're in Dropbox/Apps/Fargo). If you're syncing your Dropbox account with a folder on your computer, they're right there, on your hard drive—completely under your control. Wrapping your head around this idea is important.

Want to export the data from Fargo? No need, it's already on your hard drive.

Want to wipe it out for some reason? Go ahead, just delete it from Dropbox. It's all yours.

Want to remove Fargo's access, but keep the data? No problem. Just unlink Fargo from your Dropbox account. The data's still there on your hard drive.

I've been calling this programming model the personal cloud application architecture (PCAA). Others call these applications "unhosted". PCAA apps use a "back-end as a service" (BaaS) system of one type or another to provide storage and other services for the application. Because of its API, Dropbox is a simple BaaS system that provides storage under user control. Other examples include remoteStorage and usergrid. For the last several years, I've been working on CloudOS, with similar goals.
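
To make the model concrete, here's a minimal sketch of how a browser-based PCAA app might store an outline in the user's own Dropbox over Dropbox's v2 HTTP API. This isn't Fargo's code; the file path is a placeholder, and a real app would obtain the access token through Dropbox's OAuth flow.

```javascript
// Minimal sketch: store and fetch a user's outline in their own Dropbox.
// Assumes `accessToken` came from Dropbox's OAuth flow; the path
// "/Apps/MyOutliner/notes.opml" is hypothetical, not Fargo's actual layout.

async function saveOutline(accessToken, opmlText) {
  // Dropbox API v2 upload endpoint; the file lands in the user's Dropbox,
  // under their control, not on the app developer's servers.
  const resp = await fetch("https://content.dropboxapi.com/2/files/upload", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Dropbox-API-Arg": JSON.stringify({
        path: "/Apps/MyOutliner/notes.opml",
        mode: "overwrite"
      }),
      "Content-Type": "application/octet-stream"
    },
    body: opmlText
  });
  return resp.json();
}

async function loadOutline(accessToken) {
  const resp = await fetch("https://content.dropboxapi.com/2/files/download", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Dropbox-API-Arg": JSON.stringify({ path: "/Apps/MyOutliner/notes.opml" })
    }
  });
  return resp.text(); // the raw OPML, straight from the user's Dropbox
}
```

Notice that the application never touches a server of its own; everything it knows about the user lives in storage the user controls.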

A traditional web application looks something like this:

[Diagram: standard Web application architecture]

The browser talks to the Web application on a server that fronts a data store where user data is stored. Not surprisingly, many Web 2.0 business models follow this same architecture, deriving their revenue from the fact that they have "captured" their customers.

In contrast, a PCAA application looks like this:

[Diagram: unhosted (PCAA) Web application architecture]

In a PCAA app, the application and storage (at least for user data) are separate. Ideally, the user can pick where the application data is stored, even self-host if they like. In a PCAA app, users are not captured. Rather, they are free-range customers.

There are several advantages to PCAA applications:

  • User control—as we discussed above, this architecture gives users a lot more control over their data. This is why I call it a "personal cloud" application architecture, because the backend infrastructure is under the user's control—it's their personal cloud.
  • Less developer overhead—Developers can forget about the hassles involved in operating complex servers to store and manage user data. They get to concentrate on the application.
  • Less expense—Running backend infrastructure is expensive. PCAA apps use someone else's infrastructure.

The hassle and expense of designing, building, and running backend infrastructure keeps a lot of interesting applications from being built.

One objection I sometimes hear is that with an HTML5 application like Fargo, there's nothing to keep someone else from stealing the application. That's true, if that bothers you. On the other hand, one of the reasons for the viral adoption of the Web was that developers could view the source of any Web site and understand how it works. But if you're not in a sharing mood, there's nothing about PCAA apps that requires HTML5 and JavaScript. You could just as easily build one in Ruby or Python and run it on a server. What makes it a PCAA app isn't the language it's written in, but how it treats user data.

We've written a number of applications using the PCAA style with CloudOS providing the backend infrastructure including Forever, an evergreen contact application, a simple Todo application that I use for demonstrations, and a Guard Tour application.

One of the interesting things about PCAA apps is they don't have accounts as we've come to understand them. I don't have an account at Fargo. Fargo has users, but no accounts. You could, of course, add them if needed, but most of the time, they're simply not needed.

In a world of APIs and services, creating common backend services makes sense. One of the key drivers of innovation in computers has always been modularity. Making storage and other common services (like subscriptions, notifications, configuration management, calendaring, application support, etc.) modular will further accelerate the growth of applications by making them easier and cheaper to build.


Pico Event Evaluation Cycle


In my post on the event-query API that picos present to the world, I talked about picos listening for events and responding, but there wasn't a lot of detail in how that worked. This post will describe the event evaluation cycle in some detail. Understanding these details can help developers have a better feel for how the rulesets in a pico will be evaluated for a given event.

Each pico presents an event loop that handles events sent to the pico according to the rulesets that are installed in it. The following diagram shows the five phases of event evaluation. Note that evaluation is a cycle, as in any interpreter: the event is the input that causes the cycle to run. Once that event has been evaluated, the pico waits for another event.

[Diagram: the five phases of the pico event evaluation cycle]

We'll discuss the five stages in order.

Wait

The wait phase is where picos spend most of their time. For efficiency's sake, the pico is suspended during the wait phase. When an event is received, KRE (the Kinetic Rules Engine) wakes the pico up and begins executing the cycle. Unsuspending a pico is a very lightweight operation.

Decode Event

The decode phase performs the simple task of unpacking the event from whatever method was used to transport it and putting it in a standard RequestInfo object. The RequestInfo object is used for the remainder of the event evaluation cycle whenever information about the event is needed.

While most events, at present, are transported over HTTP via the Sky Event API, that needn't be the case. Events can be transported via any means for which a decoder exists. In addition to Sky Event, there is also support for an SMTP transport called Sky Mail. Other transports (e.g. XMPP, RabbitMQ, etc.) could be supported with minimal effort.
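
For instance, raising an event to a pico over the Sky Event API is just an HTTP request against one of the pico's event channels. Here's a rough sketch in JavaScript; the host, ECI, and event names are placeholders, and you should check the Sky Event API documentation for the exact URL scheme your KRE installation expects.

```javascript
// Sketch: raise an event to a pico over HTTP using the Sky Event API.
// The host, ECI, and attribute names are placeholders for illustration.

async function raiseEvent(host, eci, domain, type, attrs) {
  const eid = Date.now().toString();        // event id, useful for correlation
  const query = new URLSearchParams(attrs); // event attributes as query params
  const url = `https://${host}/sky/event/${eci}/${eid}/${domain}/${type}?${query}`;
  const resp = await fetch(url, { method: "POST" });
  return resp.json();                       // typically a directive document
}

// e.g. tell a pico that a guard checked in at a location (hypothetical event):
// raiseEvent("cs.kobj.net", "SOME-ECI", "tour", "location_checked", { location: "lobby" });
```

However the event arrives, the decode phase normalizes it into the same RequestInfo object before scheduling begins.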

Schedule Rules

The rule scheduling phase is very important to the overall operation of the pico since building the schedule determines what will happen for the remainder of the event evaluation cycle.

Rules are scheduled using a salience graph that shows, for any given event domain and event type, which rules are salient. The event in the RequestInfo object will have a single event domain and event type. The domain and type are used to look up the list of rules that are listening for that event domain and type combination. Those rules are added to the schedule.

The salience graph is calculated from the rulesets installed in the pico. Whenever the collection of rulesets for a pico changes, the salience graph is recalculated. There is a single salience graph for each pico. The salience graph determines for which events a rule is listening by using the rule's event expression.

Rule order matters within a ruleset. KRE ensures that rules appear in the schedule in the order they appear in the ruleset. No such ordering exists for rulesets, however, so there is no guarantee that rules from one ruleset will be evaluated before or after those of another unless the programmer takes explicit steps to ensure that they are (see the discussion of explicit events below).

The salience graph creates an event bus for the pico, ensuring that as rulesets are installed their rules are automatically subscribed to the events for which they listen.
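
As a mental model (not KRE's actual data structures), you can think of the salience graph as a map from domain/type pairs to the rules listening for them, with scheduling as a lookup in that map:

```javascript
// Toy illustration of scheduling against a salience graph.
// This is a conceptual sketch, not KRE's implementation; the rule and
// event names are made up.

const salienceGraph = {
  // "domain/type" -> rules (in ruleset order) listening for that event
  "tour/location_checked": ["gtour:record_checkin", "gtour:notify_manager"],
  "tour/tour_completed":   ["gtour:file_report"]
};

function scheduleRules(event) {
  const key = `${event.domain}/${event.type}`;
  // Rules from a single ruleset keep their order; ordering across
  // rulesets is not guaranteed.
  return (salienceGraph[key] || []).slice();
}
```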

Rule Evaluation

The rule evaluation phase is where the good stuff happens, at least from the developer's standpoint. The engine runs down the schedule, picking off rules one by one, evaluating each rule's event expression to see whether the rule is selected and then, if it is, executing the rule. Note that a rule can be on the schedule because it's listening for an event, but still not be selected because its event expression hasn't reached a final state. There might be other events that have to be raised before it is complete.

For purposes of understanding the event evaluation cycle, most of what happens in rule execution is irrelevant. The exception is the raise statement in the rule's postlude. The raise statement allows developers to raise an event as one of the results of the rule evaluation. Raising explicit events is a powerful tool.

From the standpoint of the event evaluation cycle, however, explicit events are a complicating factor because they modify the schedule. Explicit events are not like function calls or actions because they do not represent a change in the flow of control. Instead, an explicit event causes the engine to modify the schedule, possibly appending new rules. Once that has happened, rule execution takes up where it left off in the schedule. The schedule is always evaluated in order and new rules are always simply appended. This means that all the rules that were scheduled because of the original event will be evaluated before any rules scheduled because of explicit events. Programmers can also use event expressions to order rule evaluation.
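
Continuing the toy model from the scheduling section, an explicit event raised in a postlude just looks up its own salient rules and appends them to the schedule the engine is already walking; the engine's current position in the schedule doesn't change. Again, this is a conceptual sketch, not KRE's code:

```javascript
// Conceptual sketch of explicit events extending the schedule in flight.
// lookupSalientRules and executeRule stand in for the engine's internals.

function evaluateEvent(initialEvent, lookupSalientRules, executeRule) {
  const schedule = lookupSalientRules(initialEvent);

  // The schedule is walked in order; raised events only append to it.
  for (let i = 0; i < schedule.length; i++) {
    const raised = executeRule(schedule[i], initialEvent) || [];
    for (const explicitEvent of raised) {
      // Rules salient for the explicit event run after everything
      // already on the schedule.
      schedule.push(...lookupSalientRules(explicitEvent));
    }
  }
}
```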

If the rule makes a synchronous call to an external API, rule execution waits for the external resource to respond. If a rule sends an event to another pico, that sets off another, independent event evaluation cycle in the receiving pico; it doesn't modify the schedule for the cycle that executed the event:send(). Inter-pico events are sent asynchronously by default.

Assembling the Response

The final response is assembled from the output of all the rules that fired. The idea of an event having a response is unusual. For KRE it's a historic happenstance that has proven useful. Events raised asynchronously never have responses. For events raised synchronously, the response is most useful as a way to ensure the event was received and processed. But the response can have real utility as well.

Historically, KRE returned JavaScript as the result of executing rules. That has been expanded so that the result can be JSON or other correctly mime-typed content. This presents challenges for the engine since rules could be written by many different developers and yet there can be only one result type.

Presently the engine handles this by assuming that any response to an event with the domain web will be JavaScript; otherwise the response is a directive document (JSON with a specific schema). This suffices for many purposes, but doesn't admit raw responses such as images, or even just a JSON doc that isn't a directive. The engine tries to put a correctly formatted response together as best it can, but more work is needed, especially in handling raw responses.
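
Since directive documents come up here, it may help to see the rough shape of one. The sketch below is illustrative only; the field names are from memory and your KRE version may differ.

```javascript
// Roughly the shape of a directive document returned for a non-web event.
// Field names are illustrative; consult your KRE version for specifics.
const exampleResponse = {
  directives: [
    {
      name: "tour_started",                   // set by send_directive() in the rule
      options: { tour_id: "LUFX", ok: true }, // whatever the rule chose to send
      meta: { rid: "gtour.guard", rule_name: "start_tour", eid: "1234" }
    }
  ]
};
```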

This isn't usually a problem because the semantics of a particular event usually imply a specific kind of response (much as we've determined up front that JavaScript is the correct response for events with a web domain). Over time, I expect more and more events will be raised asynchronously and the response document will become less important.

Waiting...Again

Once the response has been returned, the pico waits for another event.

Conclusion

For simple rulesets, a programmer can largely ignore what's happening under the covers and the preceding discussion is overkill. But applications that consist of multiple rulesets, complex pico structures, or many explicit events can have asynchronous interactions that the developer must understand to avoid pitfalls like race conditions.

If you'd like to add other transports to KRE, we welcome the help. KRE is an open source project hosted on GitHub. We're happy to help you get started.

The event evaluation cycle described above presents programmers with a flexible, programmable, cloud-based way to respond to events. Explicit events add significant power by allowing the rule schedule to be expanded on the fly. The salience graph provides a level of indirection, binding events to rules in a way that is loosely coupled and dynamic. The event loop is the source of much of KRL's power.


Related: I originally introduced the idea of KRE creating an event loop in A Big, Programmable Event Loop in the Cloud. At that time, 2010, the concept of picos and salience graphs was not fully developed. Those were made possible by the introduction of the Sky Event API in 2011.


10-4 Good Buddy! Vehicle to Vehicle Communication


On Feb 3, a Federal District judge issued an injunction (PDF) saying that the First Amendment gives people the right to communicate on the road. The ruling was in response to a lawsuit against the city of Ellisville, MO over an ordinance that forbade motorists from flashing their lights at each other. The city was in the habit of handing out $1000 fines for light flashing. (Reason.com article)

The same day I saw that, I also read Government Wants Cars To Talk To Each Other in Time regarding the Dept. of Transportation's V2V or vehicle-to-vehicle initiatives:

The government agency estimates that vehicle-to-vehicle (v2v) communication could prevent up to 80 percent of accidents that don’t involve drunk drivers or mechanical failure.

The DoT proposal would require all car manufacturers to install v2v communications in cars and other light vehicles. The systems typically feature transponders able to communicate a car’s location, direction and speed at up to 10 times per second to other cars surrounding it, using a dedicated radio spectrum similar to WiFi. The vehicle would then alert its driver to a potential collision. Some systems could automatically slow the car down to avoid an accident.

Will people have access to these V2V systems? Will I be able to use it to send my own messages to the cars around me? Will I be able to control what messages get sent? Not likely.

Instead, the government could use such systems to spy on you or even restrict how and where you drive. There's nothing that would keep such a system from forcing a vehicle to drive the speed limit. Or keep it from entering certain areas. Your car simply wouldn't go if you tried to take it somewhere the government hadn't authorized. Of course, the DoT and others will tout the safety benefits, but there are also significant opportunities for the Nanny State to repress or, worse, the Surveillance State to spy. The car, long a symbol of freedom, would become a mere means of getting from point A to point B, so long as point B is on the list of authorized destinations for you.

While such DoT-mandated V2V systems are likely inevitable, we can work to ensure that they protect driver privacy and aren't used to curtail freedom of movement. There's also nothing to prevent us from creating our own V2V systems that send the messages we want to send. Our goal with Fuse is to give your vehicle a voice that speaks to and for you. You control what your vehicle says and who it says it to. There's no reason it couldn't be speaking to other cars. Think of it like CB radio for the 21st century.


Using jQuery Mobile and Backbone

Bicycle & Jogger with child ride along the bike path

I've been interested in using Backbone as part of my jQuery apps for a while. I finally got around to playing with it. I found a good set of tutorials by Steve Smith that builds a simple activity tracker.

Good, that is, except that there's something wrong with the formatting so you can't actually see the code snippets and some parts just seem to be missing altogether. A little hard to read. The saving grace is that Steve created a Github repo of the whole thing. What's especially cool is that each of the transformations outlined in his four blog posts is a different branch. Consequently you can check the whole thing out and experiment with each by merely checking out a different branch and reloading.

I wanted to use this model but jQuery, jQuery Mobile, and Backbone.js have all been updated since the example was written. Before using the tutorial as the basis for something I needed to make sure it all still worked with the latest versions. So, I forked the repo and added a new branch, update-to-jqm14-bb11, and made it compliant with the new versions. If you clone my repo and check out that branch, you'll have a simple activities demo that uses the latest code.
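
If you just want the flavor of the approach before digging into the repo, here's a stripped-down sketch of the Backbone side of an activity tracker like Steve's. The model attributes and REST endpoint are placeholders; his tutorial layers jQuery Mobile page handling and templates on top of something like this.

```javascript
// Minimal Backbone sketch of an activity tracker (illustrative only).
// Assumes jQuery, Underscore, Backbone, and jQuery Mobile are loaded.

var Activity = Backbone.Model.extend({
  defaults: { date: "", type: "run", distance: 0 }
});

var Activities = Backbone.Collection.extend({
  model: Activity,
  url: "/activities"            // placeholder REST endpoint
});

var ActivityListView = Backbone.View.extend({
  el: "#activity-list",         // a <ul data-role="listview"> in the jQM page
  initialize: function () {
    this.listenTo(this.collection, "reset add", this.render);
  },
  render: function () {
    var html = this.collection.map(function (a) {
      return "<li>" + a.get("date") + " - " + a.get("type") +
             " (" + a.get("distance") + " mi)</li>";
    }).join("");
    this.$el.html(html);
    // Ask jQuery Mobile to re-style the list, if it has been enhanced.
    if (this.$el.hasClass("ui-listview")) {
      this.$el.listview("refresh");
    }
    return this;
  }
});

var activities = new Activities();
var view = new ActivityListView({ collection: activities });
activities.fetch({ reset: true });
```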


Auto Industry is Ground Zero in Technology Disruption

Chris Dixon and Marc Andreessen talk with Eric Ries about startups in this video from the Lean Startup Conference last December. They get into connected cars about 9 minutes in, as part of a discussion of industries being disrupted by technology. Marc calls the auto industry "ground zero" in this disruption cycle. Well worth watching.


Complex Pico Structures: Guard Tours


We have recently been working on a project for a client that implements a guard tour patrol system. From Wikipedia:

A guard tour patrol system is a system for logging the rounds of employees in a variety of situations such as security guards patrolling property, technicians monitoring climate-controlled environments, and correctional officers checking prisoner living areas. It helps ensure that the employee makes his or her appointed rounds at the correct intervals and can offer a record for legal or insurance reasons.
From Guard tour patrol system - Wikipedia, the free encyclopedia
Referenced Fri Jan 17 2014 15:46:17 GMT-0700 (MST)

You could imagine implementing such a system in a variety of ways, but I saw it as an opportunity to experiment with picos, CloudOS, and the personal cloud application architecture, extending our techniques and learning important lessons. If you squint a little, you can see that this system and SquareTag are quite similar. In fact, the personal cloud and pico infrastructure are identical.

Design

The fundamental use case of a guard tour system is a guard going from place to place, checking in at each location to record its state and reporting any anomalies. At its simplest, a tour is an ordered list of locations. The system also has to know about guards, managers, and reports.

We modeled these by creating a prototype pico for each of these objects. At present, KRL has no facility for formally managing prototypes, so we simply write initialization functions that we run in a pico after it's created depending on which type of pico we want to create. There is an Institution pico that is the "root" object of an entire guard tour. Creating prototypes allows us to easily create a new guard tour system using an initialization ruleset.

The other type of pico that might not be immediately obvious is an index pico for tours, locations, and reports. The index pico is necessary to map identifiers for each pico to an event channel identifier (ECI), as well as providing various query capabilities. The tour, location, and report identifiers are URLs and are, consequently, globally unique. Institution picos and the various index picos are all singletons, meaning that in any given Guard Tour setup, there is only one of each.

When a guard tour has been initialized, locations have been entered, and tours created, it might look like this:

[Diagram: guard tour pico relationships]

Guards and managers have a "Guard Tour ruleset" installed in their personal cloud. In every other respect, these are just ordinary personal clouds like the ones we create for SquareTag. This is a critical idea: guards and managers don't get an account in the guard tour system. In fact the Guard Tour system has no notion of accounts. Instead, guards and managers have their own personal cloud that they might be using for other things—like running Fuse. This might be a personal cloud they use exclusively for work or they might mingle other components of their life in it—just like people do with laptops and other personal computing devices.

Each guard or manager has a subscription to the Institution pico representing the guard tour. The Guard Tour ruleset manages this subscription and knows what to do with it. The Guard Tour app provides an event-query API for use by the unhosted Guard Tour Web application that provides guards and managers with a user interface for interacting with tours.

The following diagram, from my blog post on the event-query API, shows the overall structure of the application except that a guard tour isn't a single pico, like the one shown here, but rather a constellation of picos like the one shown above. Conceptually, however, it looks just like this since the API provided by the ruleset installed in the guard or manager's personal cloud provides a proxy for the rest of the system. The Web application doesn't know that the other picos exist.

[Diagram: event-query model]

The Web application is a pretty straightforward jQuery Mobile application. We built a JavaScript library on top of the CloudOS.js library that provides a Guard Tour-specific interface for the Web application to use when calling the event-query API presented by the Guard Tour ruleset. As for the code, the KRL that runs in the various picos represents about 25% of the overall code base. There's more than twice as much JavaScript (55%) as KRL.
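
To give a feel for what that library does, here's a hypothetical sketch of a Guard Tour wrapper. The function names, ruleset id, host, and URL scheme are placeholders, not the real library's API; the actual code is built on CloudOS.js.

```javascript
// Hypothetical sketch of a Guard Tour wrapper over a personal cloud's
// event-query API. Names and URLs are illustrative only.

var GuardTour = (function () {
  var host = "cs.kobj.net";          // placeholder KRE host
  var rid  = "gtour.guard";          // placeholder ruleset id

  function query(eci, fname, args) {
    // Query side: a read-only call to a function the ruleset provides
    var params = $.extend({ _eci: eci }, args);
    return $.getJSON("https://" + host + "/sky/cloud/" + rid + "/" + fname, params);
  }

  function raise(eci, domain, type, attrs) {
    // Event side: ask the pico to do something; returns a directive document
    var eid = String(Date.now());
    var url = "https://" + host + "/sky/event/" + eci + "/" + eid +
              "/" + domain + "/" + type;
    return $.post(url, attrs);
  }

  return {
    // Find tours assigned to the guard behind this personal cloud (ECI)
    myTours: function (eci) { return query(eci, "myTours", {}); },
    // Record that the guard checked in at a location on a tour
    checkIn: function (eci, tourId, locationId) {
      return raise(eci, "tour", "location_checked",
                   { tour_id: tourId, location_id: locationId });
    }
  };
})();
```

The Web application only ever talks to the guard's or manager's personal cloud this way; the constellation of picos behind it stays hidden, as described in the walkthrough below.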

Walking a Tour

When a guard, call him "Frank," starts a tour, he goes to the Web application and authorizes it to access his personal cloud (shown in purple in the first diagram). As stated, the Guard Tour ruleset provides the event-query API that the Web application needs to function. The Guard Tour API presented by the guard's personal cloud acts as an interface for the API presented by the Institution pico. Of course, the Guard Tour API need not be identical to that provided by the Institution pico. The ruleset for a guard, for example, only exposes the functionality a guard needs, whereas the ruleset for a manager includes facilities for creating new locations and tours, updating them, and so on.

The Institution pico has subscriptions to the tour, location, and report index picos. When Frank does a search for tours that he's been assigned, the Web app asks the API presented by his personal cloud to find tours matching specific criteria, which in turn asks the Institution pico for those tours, which further pushes the request to the Tour Index pico. That pico handles the query and returns the list of appropriate tours to show Frank.

When Frank clicks on a tour to start walking it, the process is repeated, only this time the Tour Index pico is used to decode the tour identifier into the ECI for the selected Tour pico (LUFX in the diagram). Queries to that pico return the list of locations, which include ECIs for specific Location picos. As Frank manipulates the user interface, his personal cloud uses temporary channels (shown in green in the diagram) to the tour he's walking, various location picos, and a report pico created just for this interaction. When Frank's done, the report pico is completed and his personal cloud forgets all the temporary channels.

Lessons Learned

That may sound complicated, but it's surprisingly quick and effective. We could have just stored a tour's locations as data inside the tour pico. For that matter, we could store tours as data inside the Institution pico. By representing each as a separate pico, we maximize our flexibility and allow for reuse. For example, a location can be in more than one tour. More importantly, I imagine the day when each room in a building might be represented by an online avatar. One function of that room avatar would be to provide data for a guard tour. But it might also know about the room's structure, features, emergency access capabilities, maintenance, and so on.

Similarly, we could have just created accounts in the Institution pico and had it offer an OAuth-protected API directly, but I believe strongly in the idea of personal clouds and saw this as a way to experiment with how they can be used in conjunction with unhosted Web applications and a complex pico constellation to provide specific functionality. For example, Frank the guard might want his personal cloud to record the fact that he walked particular tours on particular days.

Overall, we've learned a lot from this project and I'm grateful we got the chance to do it. Here are a few of the things we've learned:

  • Building complex pico structures with many members that are also fast and responsive
  • Using personal clouds as the basis for interacting with complicated, pico-based applications
  • Techniques for testing
  • Using index picos and temporary channels

Many of these techniques will be useful to us as we build the Fuse API and application. That's next on our plate. Eventually I hope to redo SquareTag in this style, making it an application that could be used by people with personal clouds hosted on other systems, so long as they're based on CloudOS. There is much to recommend picos, event-query APIs, personal clouds, and unhosted Web apps as a programming style: it's readily distributable and protects personal data. I'm excited by the progress we've made.


Life Simplified with Connected Devices

Kelly Flanagan's lab at BYU did a concept video to show how connected devices simplify life. My son Bradford did the screenwriting. Check it out:

I think these guys did a good job of presenting an important concept: the best interface is no interface. The system gives people data when appropriate and takes cues from the actions they take, but for the most part it's invisible, working behind the scenes to make people's lives simpler.


Bitcoin isn’t Money—It’s the Internet of Money


If you've ever wondered why everyone's making such a fuss about Bitcoin, this is worth reading: Bitcoin isn't Money--It's the Internet of Money. Here's the money quote (no pun intended):

Bitcoin is a new transport layer for finance that allows decentralized, disruptive, permissionless development of applications on a separate layer. It has the capability to do for finance what the Internet did for communication.

People have a hard time understanding decentralized systems and they don't always get the power of a platform. Bitcoin and other cryptocurrencies are both.