Heroku for the Enterprise

Firstly, I love Heroku!

It's my personal "go-to" platform for development and I have even deployed a number of enterprise applications on the service with great success.

Here's the problem: PaaS has finally gone mainstream, resulting in an increasingly competitive market, with many services now focused on enterprise organisations.

For example, I've spent the last couple of months investigating Pivotal Cloud Foundry and RedHat OpenShift. These are two Polyglot PaaS environments that have a lot of overlap with Heroku. In fact, in the case of Cloud Foundry, they even leverage some of the same components (e.g. Buildpacks, created by Heroku).

Both Cloud Foundry and OpenShift have gained good market momentum, with Cloud Foundry reporting the fastest first-year sales growth ever for an open-source project. They also have well-established links into the enterprise, with RedHat building on its large installed base of RedHat Enterprise Linux and Pivotal on its connections to EMC, VMware, etc.

These services also offer a suite of enterprise-focused features, such as the ability to deploy on top of multiple infrastructure stacks (both on-premises and in the cloud), as well as future support for Docker, something RedHat is taking very seriously with OpenShift v3.0.

So where does this leave Heroku for the Enterprise?

If I were an enterprise looking for a Polyglot PaaS, why would I pick Heroku? On the surface, I can get every feature of Heroku from Cloud Foundry or OpenShift, whilst at the same time having the flexibility to deploy my own PaaS instance on almost any infrastructure stack (even behind my own firewall).

This is made worse by the fact that Heroku have not been particularly forthcoming regarding their future roadmap. They've done some good work on their security model and continue to expand their trust story (e.g. Safe Harbor, etc.), but what about the rumoured VPC or future Docker support? When compared to Pivotal and RedHat, the difference is night and day, as both companies have a clear roadmap (e.g. Cloud Foundry Diego and OpenShift v3.0).

Can Heroku Conquer the Enterprise?

In my opinion, for Heroku to successfully compete for the Enterprise, they need to take advantage of their unique selling point... Force.com.

Since the acquisition by Salesforce.com in 2010, I feel like Heroku has lost its focus and momentum, while at the same time failing to capitalise on the advantages of the broader Salesforce.com ecosystem.

As a result, if I were CEO for the day, I would make Heroku part of every Force.com platform license. For example, if you purchased a Force.com App License, it should automatically come with monthly Heroku dyno capacity (similar to how Microsoft positions O365 alongside Azure).

This approach would encourage all Force.com customers (which includes a lot of enterprise organisations) to use Heroku, instead of looking elsewhere.

In addition to the bundled licensing, I would make services such as "Heroku Connect" completely free for Force.com customers, allowing developers to easily synchronise data between the two platforms without any limitations. This should extend to Force.com API limits, from which Heroku Connect should be exempt, given both platforms are owned by Salesforce.com.

If Heroku was positioned in this way, it would suddenly become a very interesting proposition for any Force.com customer, making it very difficult to ignore when positioning a Polyglot PaaS capability. It would also act as a clear differentiator to Cloud Foundry and OpenShift, giving Heroku a much needed unique selling point.

Conclusion

I believe Heroku must act now if they want to remain relevant to enterprise customers. This means Salesforce.com must remove all barriers for their existing customers and drive clear synergy between Heroku and Force.com, as well as their broader ecosystem (e.g. ExactTarget, etc.).

Without this proactive strategy, I fear Heroku will remain a niche service for start-ups, never fully realising its potential.

Installing Rails on OS X

I recently started teaching myself Ruby on Rails. So far I'm enjoying the experience and find that Rails is living up to the hype as a fast and easy-to-use framework.

One pain point was getting Rails installed and configured on my local development machine (a MacBook Pro). Although the process is relatively simple (when you know how), it is prone to errors that can result in a lot of frustration (especially when you just want to start coding).

Below is a short five-step process for installing and configuring Rails on OS X 10.10.

Step 01:

Install Xcode from the Mac App Store. Xcode is an integrated development environment (IDE) containing a suite of software development tools developed by Apple. It's fairly big and therefore can take some time to install.

Step 02: 

Install Homebrew, which is an open source software package management system that simplifies the installation of software on OS X.

To install it, simply open "Terminal" and run the following command:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

You can check your installation for any potential problems by running the following terminal command:

brew doctor

Step 03:

Install Ruby Version Manager (RVM) and Ruby. 

Run the following terminal command:

\curl -L https://get.rvm.io | bash -s stable

Once the RVM installation is complete, close and reopen Terminal. To confirm RVM has loaded as a shell function, run the following terminal command (it should report that "rvm is a function"):

type rvm | head -n 1

To install Ruby, run the following terminal command:

rvm use ruby --install --default

Step 04:

Install Xcode Command Line Tools by running the following terminal command:

xcode-select --install

When prompted, click install.

Step 05:

Install Rails by running the following terminal command:

gem install rails --no-ri --no-rdoc

That's it! You should now be able to create a new Rails app and access it via localhost.
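For example (the app name "blog" below is just a placeholder), you can scaffold and run a new app with:

rails new blog
cd blog
rails server

Then open http://localhost:3000 in your browser to see the default Rails welcome page.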

Dreamforce 2014 - Introducing Lightning

I recently attended and spoke at Dreamforce 2014, Salesforce.com's (SFDC) premier conference.

I’ve tried a number of times to describe Dreamforce to people who haven’t been before, but to be honest, there are no words that fully cover the experience. The Dreamforce website states: 

“Dreamforce is four high-energy days of innovation, fun, and giving back. It’s your chance to learn from industry visionaries, product experts, and world leaders who can help you transform your business and your life.”

Unfortunately this doesn’t come close to describing the event, which this year saw more than 150,000 people attend from all over the world. 

My best effort would be:

“Dreamforce is the biggest conference / festival you will ever attend. The entire event feels like it was designed by Walt Disney, with the marketing polish of Apple! Between the 2,000 sessions, you’ll also find a music festival, all-night meet-ups and a one-million-dollar hackathon. It’s a techie’s dream!”

With that said and to quote Morpheus, “Unfortunately, no one can be told what the Matrix (*Dreamforce*) is. You have to see it for yourself.”

Due to the scale of Dreamforce it would be impossible to cover all of the announcements in a single blog post. As a result I’ve decided to focus on what I believe to be the most important announcement.

Salesforce1 Lightning

Salesforce1 Lightning is not new; in fact, SFDC have been working on it for years. We got our first official glimpse of Lightning last year at Dreamforce with the release of the Salesforce1 Mobile App. This application was built using Lightning, or as it was known then, “Aura”.

What is Lightning?

Lightning is the next major release of the Salesforce1 platform (AKA Force.com) and in my opinion is the biggest change since SFDC first introduced the ability to create custom objects.

SFDC have released an overview video which highlights the Salesforce1 Platform with Lightning:

Why is Lightning so important?

It’s clear the future will be device agnostic, meaning users will be accessing applications and services across a multitude of different endpoints. This could be anything from tablets and smartphones, to wearable devices like the Apple Watch, as well as other paradigms that haven’t been introduced yet.

The current Salesforce1 user interface is geared towards the browser and, I think everyone would agree, has a certain “90s look”. It also doesn’t consistently optimise for other screen sizes, requiring a complete user interface shift, as with the Salesforce1 Mobile App. This results in a confusing user experience that causes SFDC and developers pain, as they still have to think about desktop and mobile separately when developing complex applications.

With Lightning, SFDC have delivered a new mobile-optimised, modular user interface, which delivers a consistent experience across all devices and enables rapid application development through re-use.

How does Lightning work?

The easiest way to explain Lightning is to focus on two parts: the "Lightning Framework" and "Lightning Components". Essentially, Lightning applications are built using the Lightning Framework and are composed of Lightning Components.

Lightning Framework:

I’ve heard some people describe Lightning as “just another JavaScript framework”. However, it’s really more than that. 

The Lightning framework supports partitioned multi-tier component development that bridges the client and server. It uses JavaScript on the client side and Apex on the server side. This means that developers gain access to a stateful client and a stateless server architecture. 

For example, JavaScript is leveraged on the client side to manage UI component metadata and application data. The framework then uses JSON to exchange data between the client and the server.

This architecture helps drive great performance by intelligently utilising the device (client), server and network, enabling developers to focus on the logic and interactions of their app.

Lightning Components:

Every Lightning application is made up of Lightning Components. Each component is a self-contained, re-usable unit that can range from a simple line of text to a fully functioning capability. Components interact with their environment by listening to or publishing events.

SFDC have already created a number of prebuilt Lightning Components (e.g. Chatter feed, search bar, charts, etc.) which can be used as part of app development. You can also expect partners to build Lightning Components and make them available on the AppExchange.

Developers can use or expand the prebuilt components, as well as build their own custom components. Any component can then be re-used across different applications on the Salesforce1 platform.

The goal is for developers to build components instead of apps, enabling speed to value through re-use, as well as guaranteeing each component will be fast, secure and device agnostic, thanks to the Lightning Framework.

What about Declarative Development?

One of the great aspects of the Salesforce1 platform is the ability to develop declaratively (clicks, not code). This opens up app development to non-developers (e.g. business users), which SFDC call “Citizen Developers”.

As part of the move to Lightning, SFDC have created the Lightning App Builder. Put simply, this is a GUI-driven way to “compose applications” from Lightning Components.

The Lightning App Builder will enable non-developers to build even better applications, more quickly, through a drag-and-drop interface. For example, if you want a Chatter Feed in your app, simply drag one in. Everything else (e.g. performance, scale, security) is taken care of by the Salesforce1 platform and, thanks to the Lightning Framework, the components will optimise perfectly for any device.

The video below shows the Lightning App Builder in action:

Conclusion

I believe Lightning is a game changer for the Salesforce1 platform. I can see a huge opportunity for developers as they shift from creating standalone apps to re-usable Lightning Components.

This is the potential I have always seen in the Salesforce1 platform, where app development is no longer a time-consuming, costly process, but instead fast and efficient. For the first time, this is achievable through a building-block approach that doesn’t sacrifice quality, performance or security.

I fully expect to see a lot of energy around Lightning over the next twelve months as SFDC continue to make more of the capabilities available to the community. There will also be a judgment day for the current browser user interface, when SFDC will enable Lightning across the entire platform (my guess is Dreamforce 2015).

Overall I think Lightning will change the way apps are built and I’m excited to see what developers (including citizen developers) do with the new capabilities!

Developing on Force.com

The Force.com PaaS provides an enormous amount of functionality and flexibility, all of which is driven by the underlying metadata architecture.

There are a number of different ways to develop applications on Force.com, ranging from declarative development (clicks, not code) to Apex, an object-oriented programming language similar to Java.

Depending on your use case you may only need declarative development; however, for most mid-sized builds the 80/20 rule can be applied (80% clicks / 20% code).

Regardless of how you develop, there are a number of good development practices and tools that will help set you up for success. This is especially important if you are developing for a shared org (see "Force.com Org Strategy").

This article will outline some of the good practices and tools that I recommend:

Integrated Development Environment (IDE)

An IDE is a software application that provides facilities to programmers for software development. This includes a source code editor, build automation tools, debugger and (if you’re lucky) code completion features.

With Force.com you have two IDE options:

  1. Eclipse, which can almost be considered the industry standard. It’s cross-platform (Windows, Mac, Linux), has a rich community and supports thousands of plugins for many programming languages. Salesforce.com have an Eclipse plugin specifically for Force.com development, which can be downloaded for free from “developer.force.com”. They also have a comprehensive reference library for beginners and experienced developers.
  2. MavensMate (my choice) is an open source Force.com IDE developed by Mavens. This is quickly becoming the community standard for Force.com development due to its integration with the popular text editor Sublime Text. Salesforce.com themselves also point people towards MavensMate as an alternative to Eclipse.

If you are developing a mid-complexity application (Apex, Visualforce, etc.) I would certainly recommend the use of an IDE.

Source Code Management (SCM)

A common misunderstanding when developing on Force.com is that you don’t need source code management. The reason that I often hear is that “I have no code, it’s all declarative”.

The important thing to remember is that Force.com is a metadata-driven architecture: the metadata describes both the data structures in your environment and the declarative functionality implemented on the platform (e.g. your application).

As a result it’s still important to use source code management for version control, even if it’s just tracking metadata changes.

Personally I always recommend Git, which is a distributed source code management system with an emphasis on speed and data integrity. There are plenty of Git services available, but the most popular are GitHub and Bitbucket. This is where we store all of the source code (i.e. the metadata) for all of our Force.com projects (just remember to make your repositories private).
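As an illustrative sketch (assuming your IDE retrieves metadata into a src/ directory; the repository URL and commit message are just placeholders), putting a Force.com project under Git might look like this:

# start tracking the metadata retrieved by your IDE
git init
git add src/
git commit -m "Initial snapshot of Force.com metadata"

# push to a private remote repository (GitHub in this example)
git remote add origin git@github.com:example/my-force-project.git
git push -u origin master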

Continuous Integration (CI)

Continuous integration is the practice of merging all developer code with a shared mainline on a regular cadence. It enables automated testing and reporting on isolated changes in a larger code base, allowing developers to rapidly find and solve defects.

As a result, continuous integration facilitates the process of agile development, where you are constantly testing your code, ensuring that it doesn’t break the build (small and often).

The industry standard for continuous integration is Jenkins, which is an open-source tool. You can host your own Jenkins instance, but I would recommend a cloud-based service such as CloudBees.

Continuous integration will help facilitate the testing process when moving code between environments (DEV, TEST, PRD, etc.).

Development Environments

Every production Force.com environment comes with a suite of development sandboxes. These sandboxes can be created ad hoc and are a direct replica of production (although they do not include any data). Test data will need to be loaded after sandbox creation, either manually or via an integration (e.g. MuleSoft, etc.).

When provisioning sandboxes to development teams I recommend the following approach, which includes three sandboxes:

DEV = Main development environment.
CI = Continuous integration merge / build test environment.
TEST = Formal user testing environment.

A traditional development pattern would be...

  1. All development will occur in the DEV environment, leveraging an IDE (e.g. MavensMate) and source code management (e.g. GitHub).
  2. The project teams will then build their code (multiple times per day) into the CI environment leveraging continuous integration (CloudBees). This will confirm that their development does not break the build.
  3. Finally, any code positioned for a release to the GSO will be moved into the TEST environment (leveraging CloudBees), where formal testing (including UAT) can occur.

This approach ensures the entire development process can be owned and managed by the development team, offering complete autonomy.

Conclusion

In summary, Force.com is an amazingly flexible development PaaS, which is extended further if you include Heroku (I’ll save that for another time).

Hopefully this information is useful and as always, please don’t hesitate to comment below if you have any questions.

Docker - Containerisation is the new Virtualisation

I'm a huge advocate of Platform as a Service (PaaS), specifically Heroku.

Heroku enables developers to forget about the infrastructure and middleware, allowing them to focus on their application (simply push code and let the platform do the rest).

The "secret source" of Heroku is the understanding that web apps, databases and worker jobs are just Unix processes and Unix doesn't care about the stack. It's this philosophy that enables Heroku to be cross-language, by focusing on Unix processes and producing environments (via Buildpacks) that can run any server process.

This is where containers come in! Heroku uses lightweight containers called Dynos, which run a single user-specified command. With containers you can very quickly and efficiently run thousands of services on a single virtual machine, each thinking it has its own system.
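For example, the single command each dyno runs is declared in the app's Procfile. A minimal sketch for a Ruby app (the process names and commands here are purely illustrative; $PORT is the port Heroku assigns at runtime) might be:

web: bundle exec rails server -p $PORT
worker: bundle exec rake jobs:work

Each line simply maps a process type to the Unix process Heroku should run in a dyno of that type.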

This is all great and it's a core part of why I love Heroku. However, it's what happens when you expand on this concept that things become really interesting...

Introducing Docker

Docker is an open-source project for easily creating lightweight, self-sufficient containers from any application; these containers will run anywhere.

Let's break that down:

Open Source - Although Docker was created by a commercial company (dotCloud), it's open-source and has a thriving community.

Lightweight - Containers are insanely fast, providing near bare-metal performance. No need to worry about a hypervisor layer.

Self-sufficient - Each container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers.

Application - Containers package applications, not machines (making it application centric). Unlike traditional virtual machines a Docker container does not include a separate operating system.

Run Anywhere - Run on any machine, with guaranteed consistency, for example: local (OS X, Linux, Windows), Data Centre (Red Hat, etc.) and Cloud Infrastructure (AWS EC2, Rackspace, etc.)

The primary difference between a traditional Virtual Machine stack and a Docker stack is that a Docker container includes just the application and its dependencies.

Virtual Machine Stack VS. Docker Stack

Why Docker?

With Docker, developers can build any application in any language using any toolchain. Just like shipping containers, “Dockerised” apps are completely portable and can be loaded anywhere. This gives developers complete flexibility and consistency when developing applications.

Docker also has an impressive community offering, with over 13,000 images available on Docker Hub. This enables rapid application development through the use of pre-built capabilities.

However, Docker is not just great for developers: system admins can use Docker to provide standardised environments for their development, QA and production teams, removing the challenge of ensuring consistency across different environments.

Get Started

I plan to post a lot more regarding Docker, but the easiest way to learn is to experience it for yourself. I suggest you head over to the official Docker Installation guide and look up the instructions for your system. You can then grab an image from Docker Hub (for example Ghost, WordPress, PHP, etc.) and start playing.
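As a quick sanity check (the image name is just an example; the official nginx image listens on port 80 inside the container), something like the following will pull an image and run it as a detached container:

# pull the image from Docker Hub
docker pull nginx

# run it in the background, mapping port 8080 on the host to port 80 in the container
docker run -d -p 8080:80 --name web nginx

# confirm the container is running
docker ps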

I've already got a number of web application containers up and running across OS X and Ubuntu, and I have set up my own portable "Heroku-like" PaaS using Dokku, all powered by Docker!
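For reference, deploying to a Dokku host follows the familiar Heroku-style Git push workflow; a rough sketch (the hostname dokku.example.com and app name myapp are placeholders) looks like this:

# add the Dokku host as a Git remote for your app
git remote add dokku dokku@dokku.example.com:myapp

# push to deploy; Dokku builds the app with Buildpacks and runs it in a Docker container
git push dokku master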