OpenStack Kilo: Build Hyper-Scale Applications Quickly And Easily

Use of open source is soaring in the enterprise and especially in the datacentre. The likes of Google, eBay and Facebook run their global web-scale systems on open source software (OSS), and Linux is now the foundation for 75% of enterprise systems, according to a report from the Linux Foundation.

There are good business reasons for this. With many eyes on the source code, OSS can be more secure: the code can be tested and bugs spotted by thousands both inside and outside the company, and so quickly fixed. And if you are unsure about a codebase's security, its openness allows you to check for yourself.

By the same token, developers working on OSS code influence its direction, improve its quality, and can make it deliver what they want. Its openness ensures that there is no vendor lock-in. Other advantages include auditability, interoperability as a consequence of adherence to standards, and a range of support options. In short, OSS is and has for a long time been a highly viable contender for mission-critical enterprise deployments.


A case in point is OpenStack, an operating system for the cloud that lets users of cloud services manage and control resources such as compute, networking and storage through a web-based dashboard. It is backed by a rich set of features for building hyper-scale web applications quickly and easily.

OpenStack has just seen the release of its latest version, Kilo, the 11th release. Kilo offers nearly 400 new features to support software development, big data analysis and application infrastructure at scale. As a testament to its OSS roots, 1,492 individuals from more than 169 organisations contributed to the Kilo release, whose focus is on interoperability in the market, raising the bar for driver compatibility, and extending the platform to fit workloads on both bare metal and containers.

Among the many new features is the Ironic bare-metal service, for provisioning workloads that require direct access to hardware. It supports existing VM workloads and adoption of emerging technologies like Linux containers, PaaS and NFV, and allows users to place workloads in the best environment for their performance requirements.

It enables greater cloud interoperability too, as Kilo’s identity federation enhancements work across both public and private clouds to support hybrid workloads in multi-cloud environments.

Swift, OpenStack's object storage system, now supports an erasure-code (EC) storage policy type, allowing deployers to achieve very high durability with less raw capacity than replicated storage requires. The EC policy is transparent to end users: there is no API difference between replicated storage and EC storage. Container-level temporary URLs now allow objects in a private storage container to be made publicly available for a specified period of time. Kilo also brings improvements to global cluster replication and storage policy metrics, and a full Chinese translation.
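The temporary-URL mechanism is straightforward to sketch: the container owner and Swift share a secret key, and each link carries an HMAC signature over the request method, an expiry timestamp and the object path. Here is a minimal Python sketch of that signing scheme, assuming a hypothetical key, account and object name (in a real deployment the key would be set on the container, e.g. via the `X-Container-Meta-Temp-URL-Key` header, and the URL would point at your Swift endpoint):

```python
import hmac
from hashlib import sha1

# Hypothetical values -- substitute your own container key and object path.
key = b"container-secret-key"      # shared secret set on the container
method = "GET"                     # the only method this link will permit
expires = 1800000000               # Unix timestamp after which the link stops working
path = "/v1/AUTH_demo/private-container/report.pdf"

# The signature covers method, expiry and path, newline-separated.
hmac_body = f"{method}\n{expires}\n{path}".encode()
signature = hmac.new(key, hmac_body, sha1).hexdigest()

# The resulting shareable link (hypothetical endpoint).
temp_url = (f"https://swift.example.com{path}"
            f"?temp_url_sig={signature}&temp_url_expires={expires}")
```

Anyone holding `temp_url` can fetch the object until the expiry time passes, after which Swift rejects the signature; no credentials are ever exposed in the link itself.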

Kilo's load-balancing-as-a-service API is now in its second version, and the networking service gains NFV support such as port security for Open vSwitch, VLAN transparency and MTU API extensions.

The StratoGen difference

From DevOps to global enterprise deployment, StratoGen is your one-stop shop for OpenStack cloud hosting. Our OpenStack Private Cloud solution gives you increased security, control and performance without the burden of designing, deploying and managing your own infrastructure.

StratoGen’s OpenStack Private Cloud is the perfect solution for hyperscalers. Find out how and why it will benefit you and your organisation.


Backup, Archiving & Disaster Recovery – they are not the same!



As a cloud service provider offering data backup for IaaS and Disaster Recovery as a Service, we regularly get involved in conversations which bridge the distinctly different disciplines of Backup, Disaster Recovery and Archiving. They are three completely separate things which should never be confused. Whilst technologies in each area can be used to achieve some of the functions of the others, users should not be fooled into thinking that one technology solution will work for all three areas unless it is specifically designed to do so, and most are not.

We get asked questions like 'Can your DR solution retain data for 7 years?' The answer: why would you want it to? Long-term data retention is archiving; you would never want to recover your entire infrastructure to a state from 7 years ago!

We are also asked questions like 'What is the SLA around restoring my entire environment from backup data?' The answer is usually 'Not good: you need a DR solution!'

So, let’s talk about each function and what you should expect from it.

1. Data Backup.

Data backup is intended to recover individual lost or corrupt files, or individual corrupt operating system instances. Backups are typically scheduled, often every 24 hours and sometimes more frequently; continuous data protection solutions can capture changes hourly or better. Backup data is typically stored on tape or disk, either on or off site. Restoring from backup can be a complex and lengthy process depending on the volume of data to be restored. Recovery point objectives (RPOs) can be as low as an hour, but recovery time objectives (RTOs) can extend into days or even weeks for large volumes of data. Examples of backup technologies include Veeam, Symantec Backup Exec and R1Soft CDP.

2. Disaster Recovery.

Disaster Recovery solutions are designed to replicate entire environments from one location to another, with a view to rapidly re-establishing the entire environment (or a subset of it) in the event of loss of service at the primary location. DR solutions can also replicate the data back in the opposite direction once infrastructure at the primary location is available again. Because data replication is near continuous, recovery point objectives (RPOs) can be measured in minutes or seconds, and recovery time objectives (RTOs) can be as low as an hour even for large volumes of data. Examples of Disaster Recovery technologies include Zerto Virtual Replication, VMware Site Recovery Manager and Double Take.

3. Archiving.

Archiving solutions are for long-term retention of data, typically for compliance purposes in regulated industries such as finance and legal. Archiving solutions retain and index all copies and versions of a document, file or email, making them easily and rapidly retrievable, often by approved end users rather than the IT admins who typically control backup and DR. Examples of archiving technologies include Symantec Enterprise Vault, Mimecast and Metalogix.

So, in summary, don’t rely on your Backup or DR solution for archiving. Don’t expect to recover a deleted file from your DR solution. Don’t expect your archiving solution to get you back up and running in the event of a disaster! However, we can provide backup and disaster recovery of your archiving solution.


Free Webinar: Build Your Cloud Your Way

Duration: a 45-minute webinar to help you understand your cloud infrastructure options

Date:  Tuesday 16th September 2014 16:45 GMT, 11:45 ET, 08:45 PT


Karl Robinson CCO and Max Duncan, VP of Technical Operations, StratoGen

Why should you attend?

• Why a blended infrastructure is the logical future: dedicated private, public and hybrid cloud

• How embracing the cloud improves performance and scalability while reducing cost

• An example of how StratoGen seamlessly managed a transition from in-house to outsourced infrastructure (AIP Publishing)

• How leveraging StratoGen’s managed solution translates into big savings on the bottom line

See it for yourself – The StratoGen Cloud:

• A demo based on vCloud Director: see a virtual datacentre providing compute, networking, storage and security in action

• Discussion of advanced networking and remote access techniques