Enterprise Force.com Architecture - Continuous Integration

Force.com is a great place to build enterprise applications!

In my previous articles, I've outlined the value proposition of the Force.com PaaS, as well as shared my thoughts on org strategy and development methodology.

Building on this content, I wanted to share a reference architecture, covering environments, code management and deployments.

This reference architecture is specific to enterprise organisations looking to leverage Force.com as an application platform as a service. This means you are simultaneously building multiple applications, across multiple development teams, all hosted in a single Salesforce.com production org.

NOTE: Enterprise development can be complex. This post will only provide a summary of the reference architecture, although where possible I will post follow-up articles providing more detail on specific areas.

There are many ways to deploy code to Force.com; however, for enterprise organisations I only ever recommend the Force.com Migration Tool (an Ant-based tool) combined with Continuous Integration (CI).

Continuous integration is not unique to Force.com; it's a software engineering practice of continuously merging all developer working copies into a shared mainline. This methodology demands a high level of automation, which helps to reduce errors by frequently testing small pieces of work and providing constant feedback to the developer.
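To make this concrete, below is a minimal sketch of the kind of Ant build file a CI server would invoke to push code to a sandbox via the Force.com Migration Tool. The "sf:deploy" task is the real task shipped with the Migration Tool's ant-salesforce.jar; the target name "deployCI" and the environment variable names are illustrative assumptions, not a prescribed convention.

```xml
<project name="ForceComCI" default="deployCI" xmlns:sf="antlib:com.salesforce">
  <!-- Credentials are read from environment variables (hypothetical names),
       so they never need to be committed to the Git repository -->
  <property environment="env"/>

  <!-- Deploys the local "src" directory to the CI sandbox and runs all Apex tests -->
  <target name="deployCI">
    <sf:deploy username="${env.SF_USERNAME}"
               password="${env.SF_PASSWORD}"
               serverurl="https://test.salesforce.com"
               deployRoot="src"
               runAllTests="true"/>
  </target>
</project>
```

A Jenkins job would simply run "ant deployCI" after each commit, failing the build (and notifying the team) if the deployment or any test fails.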

To help explain the reference architecture, let's start with a diagram:

At the top of the diagram I've highlighted two capabilities:

Git, which is a distributed source code management system with an emphasis on speed and data integrity. Git is used to manage all source code (including metadata) for all of the projects being developed for Force.com. Every project team should have their own private repository, which enables them to manage their code via different branches. There are plenty of Git services available; the most popular are GitHub and Bitbucket.

Jenkins, which is the industry standard for continuous integration. Jenkins automates code deployments across different environments, as well as facilitates the testing process. You can host your own Jenkins instance, but I would recommend a cloud-based service such as CloudBees or CircleCI.

These two capabilities facilitate the entire development process, making the end-to-end application lifecycle management significantly simpler to manage and support.

Next we have the different environments. Every production Force.com org comes with a suite of development sandboxes (the exact number depends on your agreement with Salesforce.com). These sandboxes can be created ad hoc and replicate the configuration and metadata of production (but not its data). There are also different sizes of sandbox (e.g. Developer, Developer Pro, etc.)

To ensure you leverage these environments efficiently, the reference architecture is split into two halves.

Developer Owned (Left Side)

On the left side of the diagram, we have the developer tracks. These are fully owned by the development teams, providing complete autonomy. Every development team (which could be working on multiple apps) will be provisioned three sandboxes:

DEV = Main development environment.
CI = Continuous integration merge / build test environment.
TEST = Formal user testing environment.

A traditional development pattern would be:

  1. All development will occur in the DEV environment, leveraging an IDE (e.g. Eclipse or MavensMate) and Git.
  2. With every commit to Git, Jenkins will automatically build the code in the CI environment. This will confirm that the development has not broken the build or created any conflicts. In the event of a failure, the development team will immediately be notified (via e-mail or even social tools such as Chatter or Slack).
  3. Finally, any code positioned for a release to production will be moved into the TEST environment where formal testing (including UAT) can occur. This is also managed by Jenkins, but triggered by the development team by selecting a specific tag from Git.
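Step 3 hinges on Git tags: the team marks a known-good commit and Jenkins deploys exactly that tag to TEST. The commands below sketch this in a throwaway repository; the tag name "v1.0-rc1" is a hypothetical naming convention, not a Force.com requirement.

```shell
# Illustrative only: create a repository, commit, and tag a release
# candidate that a Jenkins job could then deploy to the TEST sandbox.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "demo" > README.md
git add README.md
git commit -qm "Initial commit"

# An annotated tag records who cut the release candidate and why
git tag -a v1.0-rc1 -m "Candidate for TEST deployment"
git tag --list
```

In Jenkins, the TEST deployment job would be parameterised on the tag name, so the development team explicitly chooses what is promoted.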

In a perfect world, every developer would have their own DEV sandbox; however, at the time of writing, Salesforce.com only offer a restricted number of developer sandboxes to their customers. As a result, developers within a specific development team share the environments (DEV, CI, TEST). This actually works fine, but obviously requires some level of coordination, which is where development methodologies such as Scrum are key!

Production Org Owned (Right Side)

On the right side of the diagram, we have another set of environments, specifically:

CI = Used to test / merge code from the different development tracks. 
PRE-PRD = An exact replica of PRD, including the full data set. A final testing environment for the development teams.
PRD = The production environment, where the live users will access the applications.

These environments are used to merge and test all code from the different development tracks, ready for a specific production release. The actual process remains consistent, leveraging Git and Jenkins to automate the deployment between environments.

Production releases can be as frequent as required; however, it is common for enterprises to start with monthly releases and progress to weekly as they gain confidence. The good news is that the reference architecture is highly scalable and could easily manage multiple production releases per day, should this become a requirement.

The two sides of the reference architecture are important as they provide a clear control point for the team that is accountable for the production Force.com org, ensuring that no unapproved code is deployed into production.


Hopefully this information provides a good overview as to how Force.com can be leveraged as an enterprise application platform as a service.

The key is to recognise that the reference architecture provides autonomy for the development teams, whilst providing a consistent control point that protects the production environment. Thanks to the high levels of automation, it also enables high levels of agility, without negatively impacting quality.

Jekyll on Heroku

Jekyll is a fantastic static site generator, allowing you to turn plain text into websites or blogs with just a few clicks!

Static site generators have been growing in popularity over the past year thanks to their speed, simplicity, portability and ease of management. Essentially they allow you to focus on your content and host the output almost anywhere, instead of you having to spend time configuring and maintaining a traditional content management solution (e.g. WordPress, Drupal, etc.).

Thanks to its Ruby roots and open-source nature, I've started to host a number of blogs using Jekyll and have found the experience to be a developer's dream.

Arguably the easiest way to host Jekyll is via GitHub Pages, however I have a lot of apps already running on Heroku and therefore favour it as a service.

This post will explain how to deploy your Jekyll site to Heroku in five simple steps:


Before proceeding, I assume you have already installed Jekyll. If not, start by checking out the excellent Jekyll Installation Documentation.

Step 01:

Add a "Gemfile" in the Jekyll project containing:

source 'https://rubygems.org'
ruby '2.1.2'
gem 'jekyll'
gem 'kramdown'
gem 'rack-jekyll', :git => 'https://github.com/adaoraul/rack-jekyll.git'
gem 'rake'
gem 'puma'

Then run "bundle install" to install the dependencies.

Step 02:

Create a "Procfile" which tells Heroku how to serve the web site with Puma (8 to 32 threads, 3 worker processes, bound to the port Heroku assigns):

web: bundle exec puma -t 8:32 -w 3 -p $PORT

Step 03:

Create a "Rakefile" which tells Heroku’s slug compiler to build the Jekyll site as part of the assets:precompile Rake task:

namespace :assets do
  task :precompile do
    puts `bundle exec jekyll build`
  end
end

Step 04:

Add the following to the "_config.yml" file:

gems: ['kramdown']
exclude: ['config.ru', 'Gemfile', 'Gemfile.lock', 'vendor', 'Procfile', 'Rakefile']

Step 05:

Add a "config.ru" file containing:

require 'rack/jekyll'
require 'yaml'
run Rack::Jekyll.new

That's it! You can now commit your changes and push your code to Heroku as normal (e.g. "git push heroku master").

Heroku for the Enterprise

Firstly, I love Heroku!

It's my personal "go-to" platform for development and I have even deployed a number of enterprise applications on the service with great success.

Here's the problem: PaaS has finally gone mainstream, resulting in an increasingly competitive market, with many services now focused on enterprise organisations.

For example, I've spent the last couple of months investigating Pivotal Cloud Foundry and RedHat OpenShift. These are two polyglot PaaS environments that have a lot of overlap with Heroku. In fact, in the case of Cloud Foundry, it even leverages some of the same components (e.g. buildpacks, a concept pioneered by Heroku).

Both Cloud Foundry and OpenShift have gained good market momentum, with Cloud Foundry reporting the fastest first-year sales growth of any open-source project ever. They also have well-established links into the enterprise: RedHat building on its strong install base of RedHat Enterprise Linux, and Pivotal with its connections to EMC, VMware, etc.

These services also offer a suite of enterprise focused features, such as the ability to deploy on top of multiple infrastructure stacks (covering on-premise and in the cloud), as well as future support for Docker, something that RedHat is taking very seriously with OpenShift v3.0.

So where does this leave Heroku for the Enterprise?

If I was an enterprise looking for a Polyglot PaaS, why would I pick Heroku? On the surface, I can get every feature of Heroku from Cloud Foundry or OpenShift, whilst at the same time having the flexibility to deploy my own PaaS instance on almost any infrastructure stack (even behind my own firewall).

This is made worse by the fact that Heroku have not been particularly forthcoming regarding their future roadmap. They've done some good work on their security model and continue to expand their trust story (e.g. Safe Harbor, etc.), but what about the rumoured VPC or future Docker support? When compared to Pivotal and RedHat, the difference is night and day, as both companies have a clear roadmap (e.g. Cloud Foundry Diego and OpenShift v3.0).

Can Heroku Conquer the Enterprise?

In my opinion, for Heroku to successfully compete for the Enterprise, they need to take advantage of their unique selling point... Force.com.

Since the acquisition by Salesforce.com in 2010, I feel like Heroku has lost its focus and momentum, whilst at the same time failing to capitalise on the advantages of the broader Salesforce.com ecosystem.

As a result, if I was CEO for the day, I would make Heroku part of every Force.com platform license. For example, if you purchased a Force.com App License, it should automatically come with monthly Heroku dyno capacity (similar to what Microsoft position with O365 and Azure).

This approach would encourage all Force.com customers (which includes a lot of enterprise organisations) to use Heroku, instead of looking elsewhere.

In addition to the bundled licensing, I would make services such as "Heroku Connect" completely free for Force.com customers, allowing developers to easily synchronise data between the two platforms, without any limitations. This should extend to Force.com API limits, from which Heroku Connect should be exempt, given that both platforms are owned by Salesforce.com.

If Heroku was positioned in this way, it would suddenly become a very interesting proposition for any Force.com customer, making it very difficult to ignore when positioning a Polyglot PaaS capability. It would also act as a clear differentiator to Cloud Foundry and OpenShift, giving Heroku a much needed unique selling point.


I believe Heroku must act now if they want to remain relevant to enterprise customers. This means Salesforce.com must remove all barriers for their existing customers and drive clear synergy between Heroku and Force.com, as well as their broader ecosystem (e.g. ExactTarget, etc.).

Without this proactive strategy, I fear Heroku will remain a niche service for start-ups, never fully realising its potential.

Installing Rails on OS X

I recently started teaching myself Ruby on Rails. So far I'm enjoying the experience and find that Rails is living up to the hype as a fast and easy-to-use framework.

One pain point was getting Rails installed and configured on my local development machine (a MacBook Pro). Although the process is relatively simple (when you know how), it is prone to errors that can result in a lot of frustration (especially when you just want to start coding).

Below is a short five step process to installing and configuring Rails on OS X 10.10.

Step 01:

Install Xcode from the Mac App Store. Xcode is an integrated development environment (IDE) containing a suite of software development tools developed by Apple. It's fairly big and therefore can take some time to install.

Step 02: 

Install Homebrew, which is an open source software package management system that simplifies the installation of software on OS X.

To install simply open "Terminal" and run the following command:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

You can ensure your installation is up to date by running the following terminal command:

brew doctor

Step 03:

Install Ruby Version Manager (RVM) and Ruby. 

Run the following terminal command:

\curl -L https://get.rvm.io | bash -s stable

Once the RVM installation is complete, close and reopen Terminal. Then confirm RVM is loaded as a shell function (it should output "rvm is a function") by running the following terminal command:

type rvm | head -n 1

To install Ruby, run the following terminal command:

rvm use ruby --install --default

Step 04:

Install Xcode Command Line Tools by running the following terminal command:

xcode-select --install

When prompted, click install.

Step 05:

Install Rails by running the following terminal command:

gem install rails --no-ri --no-rdoc

That's it! You should now be able to create a new Rails app and access it via localhost.

Dreamforce 2014 - Introducing Lightning

I recently attended and spoke at Dreamforce 2014, Salesforce.com's (SFDC) premier conference.

I’ve tried a number of times to describe Dreamforce to people who haven’t been before, but to be honest, there are no words that fully cover the experience. The Dreamforce website states: 

“Dreamforce is four high-energy days of innovation, fun, and giving back. It’s your chance to learn from industry visionaries, product experts, and world leaders who can help you transform your business and your life.”

Unfortunately this doesn’t come close to describing the event, which this year saw more than 150,000 people attend from all over the world. 

My best effort would be:

“Dreamforce is the biggest conference / festival you will ever attend. The entire event feels like it was designed by Walt Disney, with the marketing polish of Apple! Between the 2,000+ sessions, you’ll also find a music festival, all-night meet-ups and a $1 million hackathon. It’s a techie’s dream!”

With that said and to quote Morpheus, “Unfortunately, no one can be told what the Matrix (*Dreamforce*) is. You have to see it for yourself.”

Due to the scale of Dreamforce it would be impossible to cover all of the announcements in a single blog post. As a result I’ve decided to focus on what I believe to be the most important announcement.

Salesforce1 Lightning

Salesforce1 Lightning is not new, in fact SFDC have been working on it for years. We got our first official glimpse of Lightning last year at Dreamforce with the release of the Salesforce1 Mobile App. This application was built using Lightning, or as it was known then “Aura”.

What is Lightning?

Lightning is the next major release of the Salesforce1 platform (AKA Force.com) and in my opinion is the biggest change since SFDC first introduced the ability to create custom objects.

SFDC have released an overview video which highlights the Salesforce1 Platform with Lightning:

Why is Lightning so important?

It’s clear the future will be device agnostic, meaning users will be accessing applications and services across a multitude of different endpoints. This could be anything from tablets and smartphones to wearable devices like the Apple Watch, as well as other paradigms that haven’t been introduced yet.

The current Salesforce1 user interface is geared towards the browser and, I think everyone would agree, has a certain “90s look”. It also doesn’t consistently optimise for other screen sizes, requiring a complete user interface shift, as with the Salesforce1 Mobile App. This results in a confusing user experience and causes SFDC and developers pain, as they still have to think about desktop and mobile separately when developing complex applications.

With Lightning, SFDC have delivered a new mobile-optimised, modular user interface, which delivers a consistent experience across all devices and enables rapid application development through re-use.

How does Lightning work?

The easiest way to explain Lightning is to focus on two parts: the "Lightning Framework" and "Lightning Components". Essentially, Lightning applications are built using the Lightning Framework and are composed of Lightning Components.

Lightning Framework:

I’ve heard some people describe Lightning as “just another JavaScript framework”. However, it’s really more than that. 

The Lightning framework supports partitioned multi-tier component development that bridges the client and server. It uses JavaScript on the client side and Apex on the server side. This means that developers gain access to a stateful client and a stateless server architecture. 

For example, JavaScript is leveraged on the client side to manage UI component metadata and application data. The framework then uses JSON to exchange data between the client and the server.

This architecture helps drive great performance by intelligently utilising the device (client), server and network, enabling developers to focus on the logic and interactions of their app.

Lightning Components:

Every Lightning application is made up of Lightning Components. Each component is a self-contained, re-usable unit that can range from a simple line of text to a fully functioning capability. Components interact with their environment by listening to or publishing events.

SFDC have already created a number of prebuilt Lightning Components (e.g. Chatter feed, search bar, charts, etc.) which can be used as part of app development. You can also expect partners to build Lightning Components and make them available on the AppExchange.

Developers can use or expand the prebuilt components, as well as build their own custom components. Any component can then be re-used across different applications on the Salesforce1 platform.

The goal is for developers to build components instead of apps, enabling speed to value through re-use, as well as guaranteeing each component will be fast, secure and device agnostic, thanks to the Lightning Framework.

What about Declarative Development?

One of the great aspects of the Salesforce1 platform is the ability to develop declaratively (clicks, not code). This opens up app development to non-developers (e.g. business users), which SFDC call “Citizen Developers”.

As part of the move to Lightning, SFDC have created the Lightning App Builder. Put simply, this is a GUI-driven process to “compose applications” from Lightning Components.

The Lightning App Builder will enable non-developers to build even better applications, quicker, through a drag-and-drop interface. For example, if you want a Chatter Feed in your app, simply drag one in. Everything else (e.g. performance, scale, security) is taken care of by the Salesforce1 platform and, thanks to the Lightning Framework, the components will optimise perfectly for any device.

The video below shows the Lightning App Builder in action:


I believe Lightning is a game changer for the Salesforce1 platform. I can see a huge opportunity for developers as they shift from creating standalone apps to re-usable Lightning Components.

This is the potential I have always seen in the Salesforce1 platform, where app development is no longer a time consuming, costly process, but instead fast and efficient. For the first time, this is achievable through a building block approach, that doesn’t sacrifice quality, performance or security.

I fully expect to see a lot of energy around Lightning over the next twelve months as SFDC continue to make more of the capabilities available to the community. There will also be a judgment day for the current browser user interface, when SFDC will enable Lightning across the entire platform (my guess is Dreamforce 2015).

Overall I think Lightning will change the way apps are built and I’m excited to see what developers (including citizen developers) do with the new capabilities!