
Blog
Cloud Foundry: One PaaS to Rule Them All

It’s now official: beginning this summer, Cloud Foundry will be managed by a governance foundation similar in formation to OpenStack’s.

EMC, IBM, HP, Pivotal, Rackspace, SAP and VMware have all signed on as platinum sponsors to a new Cloud Foundry foundation, committing $1.5M over the next three years to a communal war chest. This alliance squelches any concern that a single vendor will define the fate of the well-established PaaS technology. With this announcement, the die has been cast: Cloud Foundry has squarely taken a front seat in the competitive PaaS landscape, and the press seems to agree.

It is my belief that one of the primary drivers of OpenStack’s evolving success is the velocity with which its dedicated engineering community has been able to move the project forward. Coupling that velocity with an open governance model and associated foundation has helped steer the project amongst various vendor agendas. I believe it is those two elements that have contributed to OpenStack's wide variety of consumption options, options that help drive customer adoption and create a network effect that continues to move OpenStack forward.

Today, Cloud Foundry has over 750 code contributors, on par with the number of contributors to OpenStack’s Grizzly release. Add engineering resources from SAP, Rackspace and HP to the mix, and you can expect that we’ll only see that number skyrocket. Further, this new governance model provides Cloud Foundry the literal foundation it needs to solidify its positioning as the predominant open source Platform as a Service technology framework.

Ecosystem + Focus

I noted back in October that the combination of an ecosystem and extreme focus is incredibly powerful, and that Cloud Foundry’s early success had a lot to do with strong execution on both fronts. With the formation of this official foundation and, more importantly, with the addition of HP, Rackspace and SAP to that ecosystem, Cloud Foundry truly has the legs it needs to reach widespread adoption.

Opening up the Cloud Foundry ecosystem lets OpenStack focus on building the best possible IaaS, and lets Cloud Foundry continue its run as the predominant PaaS offering.

An Open Cloud, from IaaS to PaaS

Much like multi-vendor adoption and deployment of OpenStack provides customers with a consistent and open API on which to base IaaS development, widespread Cloud Foundry deployments from multiple vendors will provide a similar benefit. "Write once, run anywhere" has been a long-time promise, and yet it has only been within the last year that we've begun to see it come to fruition. The PaaS technology stack unlocks application portability in a way that's historically been unheard of, and provides enterprise software development organizations a platform to make the leap from traditional development methodologies to modern DevOps implementations. The conditions couldn't be more ripe for widespread adoption of such a stack.

Blue Box is betting on Cloud Foundry

At Blue Box, we’ve been long time Cloud Foundry advocates and strongly believe the combination of OpenStack and Cloud Foundry makes for an incredible 1-2 punch. In fact, we believe the sum of the two is greater than their individual parts. Our OpenStack On-Demand offering has full support for Cloud Foundry out of the box, and we have customers actively running Cloud Foundry installations on top of our implementations today. Over the coming months, we'll be excited to share more about how our customers are utilizing both technology stacks.

Congratulations from Blue Box!

Congrats to everyone working on Cloud Foundry: today is a great day. The future looks very bright.

Jesse Proudman
Jesse Proudman is the Founder and CEO of Blue Box. Jesse is an entrepreneur with an unbridled passion for Technology and the Internet’s infrastructure. With 16 years of hands-on operating experience, Jesse brings vigor for corporate evangelism and product development mixed with an insatiable desire to win. You can find him on Twitter.

openstack open source cloudfoundry


Blog
Vote for Blue Box at OpenStack Summit

May's OpenStack Summit in Atlanta is just around the corner and Blue Box has three proposed talks. We'd love your support to vote them into the Summit!

Scaling Out OpenStack Clouds in the Enterprise

A panel discussion with:
  • Jesse Proudman, CEO at Blue Box
  • Kenneth Hui, Open Cloud Architect at Rackspace
  • Manju Ramanathpura, CTO - Intelligent Platforms at Hitachi Data Systems
  • Caroline McCrory, Head of Product at Piston Cloud
  • Boris Renski, CMO of Mirantis
  • Jan Mark Holzer, Office of the CTO, Red Hat

This panel discussion will focus specifically on scale-out deployments of OpenStack in the enterprise. The panelists will discuss their experience deploying and managing scale-out OpenStack data center environments. The panel will also discuss current operational challenges and where there are opportunities for OpenStack to improve.

High Availability and Scaling: The future of OpenStack LBaaS

A presentation by Stephen Balukoff, Principal Technologist.

If all goes well, Icehouse should see the addition of two new major features to the Neutron LBaaS product: Layer 7 routing and SSL termination. As we look to Juno and beyond, we will undoubtedly need to tackle a couple of the other major features several other high-profile load balancing products deliver today: high availability and scaling.

Case Study: Best Practices Learned Along A Leading Gaming Company’s Path to Hosted OpenStack Private Cloud

A presentation by Jesse Proudman, CEO.

With almost 1600 participating developers from more than 165 organizations and a well-developed foundation with strong governance, OpenStack has the momentum and corporate support required to become a ubiquitous cloud-computing platform. Since OpenStack’s first release in October of 2010, the core technology has improved by orders of magnitude. In short, OpenStack is the Linux of cloud computing and it is here to stay. However, even for the most advanced IT teams, deploying OpenStack has proven difficult.

openstack openstack summit openstack foundation conference presentation


Blog
OpenStack Interoperability

Interoperability is one of the foundational benefits of a technology initiative like OpenStack. Yet historically, certifying interoperability for a project as broad and far-reaching as OpenStack has been daunting.

That’s all about to change…

Today, Mirantis announced a new initiative focused on open certification for third party hardware and software integrations with OpenStack. Backed by more than a dozen vendors including VMware, NetApp, Hitachi and HP, this project will consist of a series of open source tools enabling these providers to self-certify their hardware and software against OpenStack releases and API calls. These tools will be coupled with an online dashboard providing a simple compatibility matrix.

This will be a huge step forward in helping end users who are ready to adopt OpenStack, particularly those with existing infrastructure they’re looking to capitalize upon.

Most exciting to me, this project reminds me of the RefStack initiative and its goal to provide certified compatibility amongst OpenStack installations. Acknowledging a consistent definition of what an “OpenStack” installation is enables organizations to maintain consistent expectations as they choose OpenStack solutions.

Interoperability helps drive customer adoption and fuels additional hardware and software support from new vendors. Both of these initiatives fuel the continued growth and adoption of OpenStack by both today's and tomorrow’s IT operators. Blue Box is excited to see these projects catalyze further OpenStack adoption.

Jesse Proudman
Jesse Proudman is the Founder and CEO of Blue Box. Jesse is an entrepreneur with an unbridled passion for Technology and the Internet’s infrastructure. With 16 years of hands-on operating experience, Jesse brings vigor for corporate evangelism and product development mixed with an insatiable desire to win. You can find him on Twitter.

openstack private cloud refstack


EVENTS
Atlanta OpenStack Summit, Here We Come

The Blue Box team is counting down the days until the Atlanta OpenStack Summit, May 12th – 16th, 2014. We look forward to seeing you there!
The OpenStack Summit is a four-day conference for developers, users, and administrators of OpenStack Cloud Software, bringing together the brightest technical minds to discuss the future of cloud computing.


Blog
Operations, The Missing Discussion

At the OpenStack Enterprise Forum event last week, it became apparent that installation of OpenStack has become child's play. But we're missing a discussion about the challenges of long-term OpenStack operations.

This event was a panel discussion moderated by Gartner’s Lydia Leong and featured conversation with executives and engineers from PayPal, Nebula, eBay, Internap, Solinea and HP. The panelists represented a wide breadth of OpenStack perspectives, including enterprise users, public cloud operators and OpenStack consultants, and the discussion was primarily focused on how an enterprise user would get started bringing OpenStack into their organization. While that's an important discussion, I believe we're not talking about a key component of future OpenStack adoption, a component that has been missing from much of the OpenStack discussion of late.

Ken Pepple, CTO of Solinea and one of the panelists, noted:

“We’ve hit a maturity in OpenStack where it’s actually ready. People on the leading edge…have started to be able to go out there and start to use this and bend it to their needs.”

I agree, the core of OpenStack is finally ready for mainstream enterprise consumption. We’re three years into OpenStack’s evolution, and OpenStack has become easier to install than ever. I expect that over the coming year we’ll see additional details emerge from an increasing number of organizations about their implementations. Distributions, consultancies, and orchestration technologies have made initial implementations that much easier.

Easier installation is a great enabler – it allows a great number of prospects to experience the power of OpenStack. As such, much of the conversation at OpenStack Summits and at these more targeted events has been centered on “how do I begin?" But there’s an entire component to the conversation that we’re missing: a conversation focused on operations.

Installing an initial environment has become the easiest aspect of adopting OpenStack. Operating OpenStack is arguably much more difficult. And I’m not alone in this belief. These operational challenges are the real barrier to widespread enterprise adoption.

One of the mainstays of cloud services is the notion of reduced operational burden. Salesforce doesn’t expect its customers to operate the underlying Salesforce technology. Amazon Web Services takes care of all the minutia of the hardware, software and networking challenges. Yet, the majority of OpenStack installations today are pushing those operational challenges onto the end user.

What happens in six months when your OpenStack cluster explodes at 3am Sunday morning? How do you go about upgrading your environment without impacting the end users? How do you trace performance problems, or networking issues through multiple layers of sophisticated OpenStack abstraction? With on-premise deployments, how do you determine if you’re seeing hardware problems or if there’s something odd happening with OpenStack?

This same operational challenge has always existed with new software and hardware solutions. The difference, however, is that OpenStack has been designed as a platform to run your entire infrastructure footprint upon. Historically, the failure of a single technology component would have a limited impact on an organization in its entirety. A failure in OpenStack can have the same impact as a failure of an entire data center: catastrophic.

As was discussed at the OpenStack Enterprise Forum, OpenStack talent is truly hard to come by. Existing operations teams within an enterprise aren’t up to speed on OpenStack internals, and those who are tend to work for OpenStack-related vendors. This staffing reality isn’t going to change in the near future. Instead, it's something the OpenStack community as a whole needs to spend time on. Much like I advocated in my October blog "OpenStack has One Job, Do it Well", ensuring core services work well and are easy to operate is critical to the long term success of this technology.

There are companies focused on this challenge. Companies like Meta Cloud are solving this in an interesting way for on-premise implementations, and Blue Box’s OpenStack Hosted Private Cloud provides an alternative that offloads the hardware requirement from a customer, driving a real cloud experience with the security and control of a private environment.

Regardless, there needs to be more discussion and more focus on the operations challenge. I would suggest that at the OpenStack Summit in Atlanta in May, the OpenStack Foundation add an Operations track. Let’s broaden the discussion and hear the voices of those out in the field who are living the day-to-day challenges of running new technology. Let's spread these hard-won experiences to the larger community. Let's make incubation decisions and blueprint approvals based on the operational value of the technology being contributed.

I look forward to seeing the conversation pivot in 2014 from “how do I install” to “how do I run and use OpenStack”. Then, and only then, I believe we will truly begin to see more widespread adoption.

Jesse Proudman
Jesse Proudman is the Founder and CEO of Blue Box. Jesse is an entrepreneur with an unbridled passion for Technology and the Internet’s infrastructure. With 16 years of hands-on operating experience, Jesse brings vigor for corporate evangelism and product development mixed with an insatiable desire to win. You can find him on Twitter.

openstack iaas enterprise oeforum private cloud


Blog

Engineers: You’re Doing Marketing Wrong

You know what I’m talking about. Whether it's the side comment after hearing the word “marketing” mentioned or the clear expression of frustration when reading some of marketing's published material, one thing is clear: there is a significant disdain for all things marketing amongst many engineers.

Certainly this does not pertain to all engineers. But in just the last week, I couldn't help but notice how many times I had overheard negative sentiments amongst some of my peers.

How did this happen? I mean, how did this relationship that is presumably rooted in a common purpose (getting useful products into the hands of the consumer) devolve so far? Where does this seemingly irrational distrust come from? Was it the years of products being oversimplified, over-hyped and oversold? Perhaps it's the spam, click tracking, and sometimes creepy analytics used to understand the consumer? Or the constant harping on features that we knew weren't perfect? Whatever the reason, I can’t help but think this vitriolic attitude is probably misplaced.

To be honest, I once saw marketing in a similar light. It was just another department that never quite understood the intricacies or the potential use cases of these enormously complex systems we were building. Yet, at the same time, they were reaching out to the world in order to make these very things known, understood and more so, bought. The result was siloed value proposition evangelism. And it just didn't seem to work. It frustrated marketing. It angered engineering. It thoroughly befuddled the audience.

For years disjointed, interrupt-driven marketing messages have been landing in the laps of consumers. Now that this method is slowly being replaced by "inbound" marketing techniques, marketers are learning they can develop more qualified sales leads by creating relevant content that draws prospects in. No more bullet-pointed oversimplification that addresses the needs of everyone and anyone. An "inbound" organization will start blogging more, share compelling content (white papers, case studies, use cases and the like), and even engage in social media conversations. The needle moves from glossy handouts at a trade show to creative 140-character statements of what the company stands for. This is movement in the right direction, but I think things need to go one step further.

Just to level-set, I took a look at what Wikipedia has to say about marketing. They define it as "the process of communicating the value of a product or service to customers, for the purpose of selling that product or service."

This is close, but there is one really, really critical aspect missing. Times have changed and it is my belief that, in high tech, it is no longer good enough to sell just a product or service. Just look around you. PCs are being replaced by mobile devices and some stalwarts of that industry are facing hard times. Social media platforms have started to incorporate other types of services: Facebook acquired Instagram and Twitter latched onto Vine. Google has Mail, Maps, YouTube, and a host of other features. And, of course, Amazon Web Services currently runs a large portion of the Internet.

But wait. These are all companies selling a product or service, right?

Wrong.

In each of these success stories, these organizations are not selling a product or a service. They are selling the ecosystem. Consumers are looking for more than just a device - they want the apps that come with it. Infrastructure as a service is not nearly as interesting without the value-add services and partners surrounding the offering. Arguably, each of these would not be nearly as successful as they are without their respective supporting ecosystems.

If you are not building an ecosystem around your software product, you are doing it wrong.

OK, so back to where we started. Engineers hate marketing. If we can agree that marketing is now much more than just "communicating the value of a product," I think it stands to reason that, now more than ever, engineers play a critical role in communicating the value of their technical ecosystems.

In essence, you, engineers, have become marketers.

It's no longer enough to think that by building a product it will sell itself. We, as engineers, need to do more.

Offer ways for your users to derive value from your technical material. Deliver and document stable APIs. Make it easy for others to interact with your software...including competitors. Champion and sponsor open source tools around your offering. Derive success from your supporting community of users. Speak early and often about the work that you are doing or plan to do. And, above all else, ask your marketers to help you develop and publicize all of these relationships.

This is your opportunity to build more than just a product. You will be building your ecosystem and, more importantly, your personal brand. The rest will fall into place.

Craig Tracey
Craig is a software engineer working on OpenStack for Blue Box's new Hosted Private Cloud offering. He is a contributor to a variety of open source projects in the cloud computing ecosystem and very active in his local OpenStack community. Prior to working at Blue Box he was technical lead for a small team of engineers automating all-things-cloud at HubSpot. In his free time he can be found running, playing ice hockey, or out with friends in his hometown of Boston, Massachusetts.

marketing engineering products


RESOURCES

Regular Expressions Spotlight: Groups Pt I

In February of 2013 I wrote a series of posts titled “Using Regular Expressions in Ruby.” The response to this series (and my conference presentation “Beneath the Surface: Regular Expressions in Ruby”) was unexpected and amazing. The Ruby community’s appetite for content on regular expressions is stronger than I ever imagined!

To that end, I’m starting a new series of blog posts called “Regular Expressions Spotlights.” Each of these will take a regular expression concept or component and delve into it, explaining what it is, how it works, and how you can use it in your everyday code. Once you understand these concepts, you’ll wonder how you ever coded without them.

This first post will explore regular expressions groups.

Group Basics

When I craft a regular expression, I sometimes enclose certain parts of the expression in parentheses.

/(Annie|Nell) programs in (Java|Ruby)/

These parts are subexpressions within the larger regular expression and are known as capture groups.

When I attempt to match a string to this regular expression, any matches for these groups will be stored in my computer’s memory for later access. Let’s look at an example:

When I run this code:

string = "Nell programs in Ruby"
re = /(Annie|Nell) programs in (Java|Ruby)/
re =~ string

Ruby will return

=> 0

This means there is a match for my regular expression (if there were not a match, it would return nil) and that match begins on character index 0 (the first character) of the string.
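
For instance, here is a small illustrative snippet (the non-matching string is my own invention) showing the nil case:

re = /(Annie|Nell) programs in (Java|Ruby)/
re =~ "Nell programs in Python"
=> nil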

(For more information on the =~ matching operator, please see Regular Expressions in Ruby: Part 1)

Now, I not only know I have a match (and where that match begins in my string), but I can also access the matches for my capture groups - my subexpressions within my regular expression - from my computer’s memory with a few global variables.

$1 will return the match for my first capture group.

$1
=> "Nell"

$2 will return the match for my second capture group.

$2
=> "Ruby"

Along with accessing these groups, I can also use them later in my program. For example, I can use them in string interpolation:

"#{$1} loves #{$2}"
=> "Nell loves Ruby"

I can take this even further using the results of these capture groups within my regular expression while the match is still happening. I can use \1 to use the result of the first capture group and \2 to use the second capture group.

/(Annie|Nell) programs in (Java|Ruby). \1 loves \2/

string = "Nell programs in Ruby. Nell loves Ruby."
re = /(Annie|Nell) programs in (Java|Ruby). \1 loves \2/
re =~ string
=> 0

That \1 will be replaced with whatever the result of the first capture group is, in this case “Nell.” The \2 will be replaced with the result of the second capture group, in this case “Ruby.” Again, when Ruby returns 0, it means there is a match and that it begins on character index 0 of my string.

Now numbers are nice, but Ruby emphasizes readability. Referring to the numbered global variables may technically work, but it’s not very readable. Fortunately, I can name my groups.

Named Groups

The syntax to declare a group is:

(?<group_name>pattern)

The syntax to invoke - to use - a group is:

\g<group_name>

Our first capture group matches a name (either Annie or Nell), so let’s call it “name.”

(?<name>Annie|Nell)

Our second group matches a programming language (either Java or Ruby), so let’s call it “language.”

(?<language>Java|Ruby)

The first part of my regular expression now looks like this:

(?<name>Annie|Nell) programs in (?<language>Java|Ruby)

Now to use the results of my groups, I invoke them in the second part of my expression.

\g<name> loves to talk about \g<language>

Let’s put these parts together and run the code. This time I’m going to use Ruby’s built-in match method:

(For more information on the match method, please see Regular Expressions in Ruby: Part 1)

string = "Nell programs in Ruby. Nell loves Ruby."
re = /(?<name>Annie|Nell) programs in (?<language>Java|Ruby). \g<name> loves \g<language>/
my_match = re.match(string)

Ruby will return a match data object which contains both the part of the string which matched the regular expression and the matches for my groups.

=> #<MatchData "Nell programs in Ruby. Nell loves Ruby" name:"Nell" language:"Ruby">

I can access these captures later in my program like this:

my_match[:name]
=> "Nell"

my_match[:language]
=> "Ruby"
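
As a small aside (not from the examples above, but standard MatchData behavior), named captures can also be read with string keys instead of symbols:

my_match["name"]
=> "Nell"

my_match["language"]
=> "Ruby"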

Conclusion

That sums up the basics of capture groups. Check back for part 2 of this regex spotlight where I will delve into advanced group concepts including non-capture groups, atomic groups, conditional subexpression groups, and more!

Happy coding!

Nell Shamrell
Nell Shamrell works as a Software Development Engineer for Blue Box. She also sits on the advisory board for the University of Washington Certificate in Ruby Programming. She specializes in Ruby, Rails, and Test Driven Development. Prior to entering the world of software development, she studied and worked in the field of Theatre. The world of Theatre prepared her well for the dynamic world of creating software applications. In both, she strives to create a cohesive and extraordinary experience. In her free time she enjoys practicing the martial art Naginata.

ruby expressions resources code


RESOURCES

Break Apart Rails Monoliths Using This 1 Weird Trick

Somewhere around the 6-month mark of a project there's a heavy sense, looking at the codebase, that this is how it's going to be.

TL;DR

No business logic in models - limit to AR persistence, relationships, scopes, and utility functions
No business logic in views - rails anti-pattern, avoid in ERB
No business logic in controllers - limit to HTTP and MIME responses, fire off business logic objects
Put business logic in new ruby objects - use case objects, DCI contexts, or plain ruby objects
Groups of objects will self-organize into components - split the monolith at will


Not like this... not like this...

In rails applications that heavy sense usually comes from looking at the sheer size of files in the models and controllers folders.

The reason: lots of business logic. The business wants something done? No problem. We have all these ActiveRecord classes and fields, let's use the hell out of them. Data's easy to get, even across multiple classes, thanks to the ease of building relationships.

Now... where to put the code? The common rule of thumb has seemed to be: whatever ActiveRecord class we start our .find from. (For some reason we've learned to treat classes as namespaces.)

Take that leap to get something done 50 or 100 times, and voila: giant classes with ill-defined public interfaces — classes that are simply not possible for a developer, new or pro, to comprehend and feel comfortable working with or refactoring. The more you use these mega classes the more it feels like luck that anything works.

My god... it's full of AR

Why did it have to be like this? Because subconsciously we felt constrained by the framework. We needed to build code, and there are already so many files and folders that creating more felt like a disservice. There's an app folder, so obviously the logic should go under app, right? And we're always working with objects, which are models, so models it is (unless we're starting from the controller).

No.

Remember the rule of Screaming Architecture: your directory structure should scream what your system is about. Does it scream "This is a medical application" or "This is a legal application", or does it scream "This is a rails application"? If the answer is the last, you get an idea of why you feel trapped. You're trapped in rails.

But dear developer: the framework is a servant of the application, not vice versa. If you embed your application logic in the framework, your application will lose its soul.

Keep your application and framework code well apart, as much as possible. Build your own directories of objects that represent the actual logic of the application.

Start with the next task, the next refactoring. The objects will feel oddly alone at first. But over time they will become organized. Soon they will 'scream' what the soul of your system is about.

And one day, you can relocate whole components to new subsystems, engines, or applications. They'll just be directories of your independent objects.

Shatter the monolith

Follow these basic rules to keep your system from creeping back toward the dark ages:

  1. Create business objects
    Build objects that encapsulate one or a small number of use cases (a use case is a non-trivial scenario). Pass model objects into the initializer or into methods that kick off the use case. Let the business object orchestrate the dance between model objects: the queries and method calls and saves. (This way you can also write automated tests with non-ActiveRecord factories and stubs, which will execute much more quickly.) A minimal sketch of such an object appears after this list.

    Many schools of thought have formed around OO ways of building systems that participate in frameworks: DCI (by the creator of MVC), hexagonal architecture, Objects on Rails, and basic use case-driven formulas like Clean Architecture.

  2. No business logic in models
    This is simultaneously the easiest and hardest rule to follow, especially for legacy systems. If a model has 50 methods, there's a lot of gravity to just add a 51st. But really a rails model should just define relationships, scopes, and basic utility methods that provide simple conveniences. Almost everything else can be extracted, usually in whole sets of methods that do related things. Those related methods are almost always in service of fulfilling a use case, which should be orchestrated by a dedicated object.

  3. No business logic in controllers
    Something needs to fetch the model objects -- that's historically been the controller. And with the objects at hand, the temptation is to just call methods directly. But the controller has enough on its hands. It has to validate requests and load ActiveRecord objects. It also needs to render responses for the desired MIME type and ensure that the correct HTTP code is returned. Any actual business logic should be delegated to business objects. This will save you a lot of duplication across the several controller CRUD actions.

  4. No business logic in views
    Speaking of escaping the dark ages... no one fetches or uses objects in ERB views anymore, right? It's a hideously suboptimal technique from the foul legacy of PHP, and should remain banished. Keep views simple: paint the collections already provided by the controller.

  5. Build directories of business objects
    These can live in app/ or at the top level. Add the directory to rails autoload_paths in config. As new objects are built, look for opportunities to group them into sub-directories, which represent components of the system. Write tests in corresponding test directories.
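
Here is a minimal sketch of rule 1 in practice. Every name below (ArchiveProject, Project, Notifier, archivable?) is hypothetical, invented purely for illustration; the point is the shape: the business object receives models, orchestrates the use case, and leaves the ActiveRecord classes and the controller thin.

# app/use_cases/archive_project.rb
# (If the directory lives outside app/, add it to config.autoload_paths per rule 5.)
class ArchiveProject
  def initialize(project, notifier: Notifier.new)
    @project  = project   # ActiveRecord model is passed in, not looked up here
    @notifier = notifier  # collaborator that can be stubbed in fast unit tests
  end

  # Orchestrate the use case and report a simple result the controller can act on.
  def call
    return false unless @project.archivable?

    @project.update(archived_at: Time.current)
    @notifier.project_archived(@project)
    true
  end
end

# The controller stays focused on HTTP concerns (rule 3):
#
#   def archive
#     project = Project.find(params[:id])
#     if ArchiveProject.new(project).call
#       head :no_content
#     else
#       head :unprocessable_entity
#     end
#   end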

For legacy systems this is a long process, but also one that feels good immediately. And if you build a new system, prevent any baby monoliths from forming by creating a use case object for your very first requirement.

Do not underestimate the power of the dark side. Pull away from the framework monolith with all of your might.

Chris Galtenberg
Chris Galtenberg is the Engineering Applications Lead for Blue Box. He holds a Software Engineering degree, and for over 16 years has worked in many industries, startups, and languages. In his free time he strives to join his loves of writing, philosophy, and software. Follow him on Twitter: @galtenberg

rails ruby


Blog
Early Adopters, The Time Is Now: OpenStack, On Demand

I am proud to announce Blue Box’s OpenStack Early Adopters Program. Blue Box’s On-Demand OpenStack is a first-of-its-kind offering that delivers true single-tenant private cloud environments that meet all five defining private cloud characteristics.

The Five Characteristics

The following five characteristics are the foundation of a true private cloud:

  • Ease of Use - Private clouds should be incredibly simple to both deploy and use and require minimal drain on organizational resources.
     
  • Integrate with Existing IT Infrastructure - Private clouds should be able to tie back into legacy IT infrastructure including appliance-based load balancers, security hardware (IDS / IPS / firewalls / etc.), storage and database infrastructure.
     
  • Security Policy Control - Private cloud should give customers fine grained security control at both the border level, as well as within the private cloud itself.
     
  • Cost Predictability and Control - Public clouds often leave customers with surprise bills. Private clouds should eliminate this uncertainty by providing simple, consistent billing that is based on raw cloud capacity, and not on a complex formula consisting of hundreds of different billing points.
     
  • Elastic Capabilities - To truly be considered a cloud, an implementation needs to be elastic. This is a challenge for on-premise implementations today, which, by nature, require long procurement cycles to add additional capacity.
     

With all the options, why OpenStack Hosted Private Cloud?

While there are companies today that offer incredible on-premise OpenStack implementations, the truth is that for many buyers, it can take weeks or months to complete a deployment. Between budgeting cycles, data center capacity, hardware procurement and time to do the actual implementation, there are a significant number of hurdles to overcome to move from implementation to utilization with an on-premise solution.

Blue Box is First to Deliver on all 5 Characteristics

Blue Box’s Hosted OpenStack Private Cloud is the first to market with an offering that delivers on all 5 private cloud characteristics. Because the offering is hosted, customers avoid the need to procure and deploy hardware, meaning implementations can be completed in hours or days versus weeks or months. Blue Box can tie deployments back into customers' data centers via Equinix’s ethernet direct connect or via VPN tunneling. Custom security controls can be implemented at both the private cloud border and within OpenStack itself, ensuring compliance with corporate security policies. And because the cloud is billed simply on capacity, bills are predictable and understandable.

Since you're buying capacity units, pricing is transparent and published as you would expect.

And most importantly, Blue Box is the first to deliver true elasticity within a private cloud deployment. Cloud capacity can be requested on-demand and integrated into your cloud in minutes or days. This feature empowers organizations to grow total compute capacity as required without long procurement cycles.

Blue Box's offering enables you to work with OpenStack, not on OpenStack. Our teams will monitor your private cloud 24 x 7 and not only maintain the hardware, but also keep your OpenStack software updated with continuous delivery of OpenStack releases. Let us be the OpenStack operations experts so you can focus on making your business successful.

Why OpenStack?

Without a doubt, OpenStack is the Linux of cloud computing. With almost 1600 participating developers (and more joining every week) from more than 165 organizations and a well-developed foundation with a strong sense of governance, OpenStack has the momentum and corporate support required to become the ubiquitous cloud computing platform. Since OpenStack’s first release in October of 2010 much in the code base has evolved, and the core technology has improved by orders of magnitude. From the public clouds offered by Rackspace, HP, IBM and others, to the on-premise offerings from Mirantis, Cloudscaling, Piston, Nebula, Meta Cloud, and more, OpenStack has never been easier to consume.

Blue Box believes in the OpenStack mission of cross compatibility. Hence, we built our offering using 100% upstream source code. We believe in the power of the converged cloud and we want our customers to utilize on-premise private, hosted private, and public clouds in a seamless manner. OpenStack is the best technology to deliver that capability.

Ready to get started?

Is your organization ready to experience the benefits of a hosted private cloud? Blue Box’s Early Adopter program was designed just for you. Reach out to one of the seven Blue Box team members at this year’s OpenStack Summit in Hong Kong and fill out our contact form to get started!

The future for both OpenStack, and for true private cloud shines bright, and Blue Box is delighted to hold the torch high.

- Jesse Proudman

Jesse Proudman
Jesse Proudman is the Founder and CEO of Blue Box. Jesse is an entrepreneur with an unbridled passion for Technology and the Internet’s infrastructure. With 16 years of hands-on operating experience, Jesse brings vigor for corporate evangelism and product development mixed with an insatiable desire to win. You can find him on Twitter.

open source openstack iaas openstack summit private cloud converged cloud on-premise early adopters


Blog
Really Big Data: The Digitalized Health Record Explosion

Seizing the opportunities and avoiding the pitfalls of the 2013 HIPAA Omnibus Update

On September 23rd, 2013, the HIPAA Omnibus Final Rule went into effect. This update is the most sweeping change to the HIPAA regulations since they were first instituted in 1996. For IT professionals the most interesting element of this update is the requirement that health care providers grant patients access to their health records in electronic format upon request. Couple that data access requirement with the Affordable Care Act’s (ACA) mandate that medical providers switch from physical patient charts to electronic records, and suddenly we've opened the door to a truly incredible Big Data revolution for healthcare IT.

“Any long term solution to the economic issues plaguing healthcare will involve the rise of smart machines.” -- Derek Collison, CEO of Apcera

Today, there are more than 250 million Americans with active health coverage, and with the implementation of the Affordable Care Act, that number will most certainly increase. The wealth of data that each patient generates from every physician visit, medical test, prescription and ensuing medical transaction is enormous. The new electronic requirements will open a massive door of opportunity for companies to create technologies that capture and analyze the flood of ensuing data.

Just imagine the insights that can be derived from the ability to crunch medical test records from millions of users, or the prediction algorithms customers have become familiar with from Google and Facebook, but used for medical information. Imagine the ability to integrate the “internet of things” - day-to-day sensors (think FitBit or Jawbone, or digital scales) - into real-time medical information. The moment for transformative discovery has arrived, and beyond the economic benefits for companies riding this wave, the social benefit of being able to drive down healthcare costs and realize better patient outcomes is unparalleled.

But as new startups enter the market to capture this tantalizing opportunity, they’ll need to remember that the new HIPAA Omnibus does more than present them with a lucrative opportunity. It also tightens up regulations and adds teeth to their enforcement. Historically, the government has taken a fairly lax stance toward those found to have leaked personally identifiable information (PII) and patient health data. These new Omnibus HIPAA regulations change the liability and fines in a dramatic way. And the Department of Health and Human Services has made it clear that it intends to hold organizations significantly more accountable.

This means that startups rushing headlong into the healthcare big data boom need to make sure their compliance strategies aren’t an afterthought. With great power comes great responsibility. And now there will be great penalties for those that do not take that responsibility seriously: the Omnibus regulation raises the maximum penalty for security breaches to $1.5 million per violation.

Unfortunately, unlike PCI compliance, ensuring HIPAA compliance isn't as simple as following a checklist of actions. HIPAA requires that an organization follows a number of "industry best practices" across a multitude of areas but does not define what those industry best practices actually are. The vagaries can leave IT organizations unintentionally exposed. Complicating things further, HIPAA compliance goes well beyond purchasing “compliant” hosting infrastructure. Applications must be designed in a secure way and internal policies and procedures have to be defined and enforced.

Startups like Accountable are entering the market to help make the HIPAA compliance process easier. Other companies can help provide a “HIPAA compliant” hosting infrastructure that is designed to meet those core industry best practices. Regardless, effective compliance that won’t expose you to risks means taking a thorough 360-degree approach. Now more than ever, it’s crucial to work with an auditor like Coalfire to help build your formal HIPAA compliance plan from top to bottom.

It’s rather straightforward – CYA. A seemingly small mistake could bring about massive penalties that will crush a startup. Don’t rush blindly after the revenue attached to the impending big data explosion in the healthcare industry. Respect and protect the data like you never have before, because too much is at stake.

This post was originally published on the Washington Technology Industry Association website on Sept 23, 2013.

Jesse Proudman
Jesse Proudman is the Founder and CEO of Blue Box. Jesse is an entrepreneur with an unbridled passion for Technology and the Internet’s infrastructure. With 16 years of hands-on operating experience, Jesse brings vigor for corporate evangelism and product development mixed with an insatiable desire to win. You can find him on Twitter.

open source iaas hipaa saas

