Hacking Your Migration with Inheritance and Drush

At Phase2 we love the Migrate module. It provides so much out of the box to facilitate moving content into a Drupal site that someone says “eureka!” almost every week in our developer chatroom. Varied data sources, highwater marks, instrumentation, and the many advantages of working with a well-architected, object-oriented system are just a few of its great features. The full range of functionality is not needed on every migration, but on your next project that one unused feature can easily become the lynchpin of success.

On a recent migration project we found several areas that needed improvement near the end of the development schedule, and we were able to make some surprisingly deep changes in short order. In this post I’ll share how we were able to move so quickly by walking through the code techniques we used and some of the specific changes we made.

This post is very technical and assumes you have some basic familiarity with implementing migrations using the Migrate module. While there is a lot of code below, this is about technical approach rather than reusable solutions. If you’d like to do some background reading first, check out Rich Tolocka’s blog on planning your content migration, or Mike Le Du’s overview of object-oriented programming in Drupal 8.

Using Inheritance to Fix Performance

Because the Migrate module uses a nicely object-oriented plugin system, it’s easy to create a custom version of any piece of functionality without needing to duplicate code. As with any custom-built migration, it starts with your base migration class, the controller of the entire process. Let’s get started with an example—migrating Lightning into our new Drupal site.


The Lightning source system is a Web API that provides two primary access mechanisms. The first resource returns a list of items from an offset (starting item) to a maximum number of results (limit). For our convenience, it also includes some “pager data” describing the total number of items that match our criteria, even if they are not all available in the immediate API response. For each item in the list, we have a unique ID which can be used to craft the second type of API request, this one to provide all the data available for the item with that ID.

These API calls are used in the following way to list all items:

  • Make a request for the “next” item to migrate with a query of items ordered by creation date. (Creation date is something we can safely treat as “immutable” and thus not going to change between requests.)

  • Request the details of this item so we can process the data for import.

  • Repeat this process until we’ve imported the total number of items as reported in the “pager data” mentioned above.

Now that we understand how to use the data source, it’s time to start putting together the code to manage the migration. All migrations start by inheriting the functionality of the Migration class.

Within its constructor, you can select non-standard plugins for the “map” (the class that tracks the relationship between your Drupal site and the origin system) and the “source” (the class that pulls data from the origin system). The ability to customize behaviors of your migration by surgically replacing code from the core Migrate module allows you to quickly make significant changes.

We need to create our own custom Source plugin to handle the collection of JSON data from the API. Our LightningMigrateSourceHttp class is set up to understand how to traverse the API to list items, and will dispatch the identifiers it extracts for import to an instance of a MigrateItem source handler. We will specifically use MigrateItemJSON because we need its capability of making API calls and processing JSON results.

While our main source plugin could be directly coded to use MigrateItemJSON, we’ll use dependency injection to pass in the class as a parameter. This keeps LightningMigration in control of the details of migration execution, and if we decide to swap out our MigrateItem instance for something more customized it will be a single-line change. Centralizing custom migration logic like this makes onboarding new developers much easier; tracing key decisions across an entire directory of classes takes far longer.
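
The constructor wiring might look roughly like this. This is a sketch rather than the project’s actual code: the class names follow the post, but the source-class constructor signature, URLs, field list, and content type are assumptions.

    class LightningMigration extends Migration {

      public function __construct($arguments) {
        parent::__construct($arguments);

        // Map: track the relationship between source IDs and Drupal node IDs.
        $this->map = new MigrateSQLMap(
          $this->machineName,
          array('id' => array('type' => 'varchar', 'length' => 64, 'not null' => TRUE)),
          MigrateDestinationNode::getKeySchema()
        );

        // Source: our custom class walks the listing API, while the injected
        // MigrateItemJSON handler fetches and decodes each item's JSON record.
        $this->source = new LightningMigrateSourceHttp(
          'https://api.example.com/lightning/list',
          new MigrateItemJSON('https://api.example.com/lightning/item/:id', array()),
          array('title' => t('Title'), 'body' => t('Body'))
        );

        // Destination: a standard node destination.
        $this->destination = new MigrateDestinationNode('lightning');
      }

    }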

This worked quite well for importing all items initially, but we ran into performance problems when checking whether an item needed to be updated. To identify the next item for import from the Lightning API, we have to ask for the next identifier matching our criteria before we can request that item’s full details for processing. Crawling all the content of the Lightning API looking for items to update is neither efficient nor fast, because it introduces a lot of Internet latency (the time between a request to a remote server and its response). Luckily, we already have a local source for all the nodes we might want to update: the migrate map table.

The Migrate module tracks all imported items via a “map” table automatically generated for each migration. This table has columns for the source identifier, destination identifier (e.g., nid), the current status of the item (does it need to be updated from source?), and a few other processing details. We are interested in the list of source identifiers that this table maintains, allowing us to replace web requests for local database queries.

In LightningMigrateSourceHttp we have implemented getNextRow(), which the abstract MigrateSource class uses to identify the next item for import. This is the method that issues the listing API call we wish to replace. Let’s create LightningMigrateSourceHttpFromMap, a new source class that overrides getNextRow() with our new logic, and swap it into our LightningMigration constructor:
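
The swap itself is a one-line change in the constructor (again a sketch, reusing the assumed arguments from the earlier example):

    // In LightningMigration::__construct(): same arguments, new source class.
    $this->source = new LightningMigrateSourceHttpFromMap(
      'https://api.example.com/lightning/list',
      new MigrateItemJSON('https://api.example.com/lightning/item/:id', array()),
      array('title' => t('Title'), 'body' => t('Body'))
    );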

The LightningMigrateSourceHttpFromMap class behaves exactly like its parent class, except that it has dropped half of its web requests and saves 1-5 seconds per item by asking Drupal’s database for the next piece of content to import. Our new getNextRow() logic calls the following function to identify the next item from the map table:
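
A sketch of that helper is below; the needs_update column and its status constants come from Migrate’s map schema, while the method name, table name, and offset handling are assumptions.

    /**
     * Look up the next source ID flagged for update in the migrate map table.
     */
    protected function getNextSourceIdFromMap($offset = 0) {
      return db_select('migrate_map_lightning', 'map')
        ->fields('map', array('sourceid1'))
        ->condition('needs_update', MigrateMap::STATUS_NEEDS_UPDATE)
        ->orderBy('sourceid1')
        ->range($offset, 1)
        ->execute()
        ->fetchField();
    }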

If the return value is empty, we increment an offset counter in getNextRow() and try again; this allows us to skip broken entries until we find a usable row.

We also needed to override how we extract the source IDs for import. These are both custom methods, but they are another demonstration of clean inheritance. First, the method that reads the API-driven data structure:
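
Something along these lines (the method and property names are illustrative, not the project’s actual code):

    /**
     * Pull the source identifier out of a decoded JSON listing entry.
     */
    protected function extractSourceId($row) {
      // $row is one entry from the JSON "list" response.
      return $row->id;
    }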

Now we replace it with a method that uses the SQL result object instead of the JSON response object:
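
Continuing the same sketch:

    /**
     * Pull the source identifier out of the map-table query result instead.
     */
    protected function extractSourceId($row) {
      // $row now comes from the database query, so the identifier lives in
      // the sourceid1 column of the migrate map table.
      return $row->sourceid1;
    }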

A secondary impact of drawing our migrate IDs from the map table is the limitation that our code will only perform updates to already imported content. If we simply made this change as a direct replacement we would never be able to import new items from the source again. There may be a use case for that somewhere, but for our purposes we need both efficient updates and new item imports. It’s time to introduce some new run-time options to how we migrate the Lightning.

Introducing Optional Flags with Drush

At Phase2 we use Drush to run our migrations; it’s a great way to sidestep the memory-limit and automation problems you might hit using the administrative UI. Migrate has fantastic Drush integration, but like any Drush command, migrate-import has a specific list of options and flags it understands how to handle. That will not stop us.

We could run the Drush migration command with --strict=0, which lets us use any flags we invent without complaint from Drush, but that is a bad practice: it creates invisible, unintuitive options that future developers on the project may never learn about. Drush allows you to go a few steps further and add “official” flags to any command in the system. Let’s add an option to the migrate-import command, which is used to trigger migrations.
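
Drush’s hook_drush_help_alter() lets a module bolt an option onto an existing command. A minimal sketch (the module name and help text are assumptions):

    /**
     * Implements hook_drush_help_alter().
     */
    function lightning_migrate_drush_help_alter(&$command) {
      if ($command['command'] == 'migrate-import') {
        $command['options']['updates-only'] = 'Only update previously imported items, using the local migrate map table.';
      }
    }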

Now that we have a way to introduce options to the system, we can go ahead and vary the migration. The code above shows the --updates-only flag; let’s go ahead and support it with a quick code hack to our LightningMigration class:
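
Roughly like so, reusing the assumed constructor arguments from earlier:

    // In LightningMigration::__construct(): pick the source class based on the
    // new Drush option, guarding for non-Drush contexts such as the Migrate UI.
    $updates_only = function_exists('drush_get_option') && drush_get_option('updates-only', FALSE);
    $source_class = $updates_only ? 'LightningMigrateSourceHttpFromMap' : 'LightningMigrateSourceHttp';
    $this->source = new $source_class(
      'https://api.example.com/lightning/list',
      new MigrateItemJSON('https://api.example.com/lightning/item/:id', array()),
      array('title' => t('Title'), 'body' => t('Body'))
    );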

Now instead of just one source plugin, we have a choice of two source plugins depending on whether the new Drush option is in use. Be careful to always check for the availability of Drush code before using any of its functions; this caution keeps the migration code compatible with the Migrate UI.

Executing Our Migration

Now we have two different migrations in one: the first operates using our performance enhancement and will only be able to update content, and the other uses the normal behavior. For a complete process of importing new content and updating any previously imported content, we need to run two different commands.
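
For example (the migration machine name is an assumption, and --updates-only is our new flag):

    # Pass 1: flag previously imported items for update and process them via the map table.
    drush migrate-import lightning --update --updates-only
    # Pass 2: a normal run that picks up new items from the API.
    drush migrate-import lightning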

The first command marks all previously imported content as needing an update, and has the flag telling the migration to use our new database-driven logic. The second is a normal migration run, which will focus exclusively on importing new content. The latter can be considered “exclusively new” because we assume all content needing an update will complete during the first command.

This partitioning of responsibility is also good for resource limits like memory usage, since we now use two different PHP processes to handle the operation. A large migration requires careful attention to system resources.

Ongoing Synchronization

Unfortunately the import of new content still has a problem. Part of our use case is the ability to periodically pull down Lightning data to make sure the Drupal site remains in sync with the canonical source. This means our migration needs to crawl the entire Lightning API looking for new items to import. We just went to some trouble to avoid that kind of broad crawl, so let’s make a few more changes to complete this work.

Since this is an article about migration hacks and not perfect migration designs, we won’t talk about using highwater marks, even though that is the correct approach. In fact, I look forward to retrofitting highwater marks into the Lightning migration in the future. Instead, let’s consider something simpler:

  • Add a --recent-changes flag to Drush.

  • If this option is present, only list content created within the last few days by adding a date condition to the API call for listing content.

With these tweaks in place, the migration command for capturing only recent changes is executed a little differently:
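
Something like the following (again, the migration name and flag spelling are carried over from this post as assumptions):

    # Update pass: unchanged, still driven by the local map table.
    drush migrate-import lightning --update --updates-only
    # New-content pass: only list items created in the last few days.
    drush migrate-import lightning --recent-changes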

Now we have a lean migration that is only concerned with the last few days of changes. The downfall of this simplified highwater system is that any series of days where the process fails will result in losing track of some changes; a real highwater system records when each run occurs and reaches back over days or weeks as needed. However, since this behavior is only used when manually triggered via --recent-changes, we can use the complete migration to fill in any gaps.

Respecting the Source Provider

If your migration process involves a lot of content and many troubleshooting cycles, you should spare a thought for the provider’s infrastructure. Do you have exclusive use of its database, so that you avoid disrupting other users? In our example we are leveraging API-provided data, and respectful behavior means not pummeling the API with thousands of redundant requests.

Many APIs have a request-throttling or rate-limit mechanism to keep server resources available to all users. For a use case like migration, where many API requests are needed, it’s not only respect but necessity that forces us to take measures against excessive API usage. Local caching is a great way to avoid extra HTTP requests and be a good Internet citizen, so let’s change our code to check whether we have already cached the data before making a new API request.

You saw in the code above that we passed MigrateItemJSON in as the source handler for extracting content from individual JSON records by requesting each item’s URL.

Let’s swap that out for our own class where we can tailor the caching.

Here’s what that class might look like in practice (this works because the Lightning data updates at most once per day, and we can always clear the cache if we need a clean sweep):
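
A sketch of such a class is below. The class name, cache bin, and one-day lifetime are assumptions; only the URL loader is overridden, and everything else is inherited from MigrateItemJSON.

    class LightningMigrateItemJSONCached extends MigrateItemJSON {

      /**
       * Check a local cache before hitting the remote API.
       */
      protected function loadJsonUrl($item_url) {
        $cid = 'lightning:item:' . md5($item_url);
        if ($cached = cache_get($cid, 'cache_lightning_migrate')) {
          return $cached->data;
        }
        // Fall back to the parent implementation for the actual API call.
        $data = parent::loadJsonUrl($item_url);
        if (!empty($data)) {
          // The source data changes at most once a day, so a one-day TTL is plenty.
          cache_set($cid, $data, 'cache_lightning_migrate', REQUEST_TIME + 86400);
        }
        return $data;
      }

    }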

The simple inheritance of MigrateItemJSON allows us to override a single method to wrap cache handling into the system. Now we can re-run the migration repeatedly while testing the process without producing an excessive number of requests to the source API. This change is focused on adding a caching layer, so we are still calling out to the parent implementation of loadJsonUrl() to make the actual API call. The ability to layer lean slices of functionality is what makes object-oriented reuse fun. (If you want to implement the above code yourself, don’t forget to also create the cache table and wire up hook_flush_caches()!)

This change does not need any new options, though it might be interesting to facilitate a cache bypass for spot-testing the end-to-end process. From a Drush mindset, that might be as simple as checking whether the drush migrate-import option --idlist is in use and skipping the cache for the specified items.

A Taste of Drupal 8

Approaching your migration with an eye for all the powerful tools of object-oriented code is a great way to get a taste of Drupal 8 development. Creating a strong object-oriented code architecture is a skill in its own right, but the Migrate module has been polished for years and is a great place to start.

Very often a clever migration hack is too specific to easily reuse, but if you’ve got a trick, please share it in the comments below! Your thought process and the specific points in the code where you customize your migration can be reused even if your use case cannot.

Subscribe to our newsletter to keep up with new projects and blogs from the Phase2 team!

Large Scale, Server-Side Mapping in Drupal with the Leaflet-Geocluster Stack: Part 1

On a recent project here at Phase2, we were tasked with creating a responsive, data-scalable, dense-point map in Drupal 7. The sticking point for this application was that the starting data scale of about 18k data points needed the ability to scale up to around 10⁴ data points. We wanted the page load time to remain under one second on the Drupal 7 application stack.

Our initial knee-jerk reaction was to question whether Drupal was the right tool for the job, but we began researching ways to implement this completely in Drupal. We did find one: we ended up implementing the Leaflet-Geocluster stack.

In this two-part blog series, we’ll take a look at server-side mapping and look into the performance bottlenecks (and what we can do to mitigate them). Then we’ll take a closer look at our implementation, our pain points, and a few key application-specific customizations. The hope is that you’ll have enough details and reference material to implement your own large-scale mapping application.

The end result is currently in production on the Volunteers in Service to America site. If you’d like to follow along, the code has been released under the GPL and is available on github.

The Problem

How do we use Drupal and current mapping technology to provide a responsive map application that can scale up to 10,000 data points and is usable on any device?

The solution to this problem was twofold: first, use server-side processing to produce clusters, then amortize delivery of user-requested data over multiple workflows.

Why is this a difficult problem?

In Drupal, entities that we want to map will usually store an address (with Address Field or Location) from which geodata (latitude-longitude coordinates) are encoded into a separate field (usually Geofield). Then we either map one entity via a field formatter within an entity view mode, or we use Views to query for a list of geocoded features that act as a data feed for a map.

Current high-density point strategies are mostly limited to client-side clustering, which produces a rendering similar to the following:

[Screenshot: a map rendering dense point data with client-side clusters]

Here, we have a usable interface where we can click to zoom in on point clusters to get at the area of interest. As we zoom in, clusters regroup themselves and clickable points start appearing. This is a very familiar interaction, and implementing this with client-side clustering works pretty well when we are mapping on the order of 10² points.

Once we get to a number of points on the order of thousands, tens of thousands, and upwards, client-side clustering is no longer effective: the page spends so long loading and rendering the point data that the mapping library barely gets a chance to transform it into a visualization.

One of the reasons this breaks down is that in the modern world of varied, unpredictable devices consuming our content, we don’t have enough information to theorize about how much client-side processing is appropriate. When we get to the scale of thousands of points per display, we need to assume the lowest common denominator and offload as much processing to the server as we can in order to minimize the client-side computational load.

To see why, let’s take a brief look under the hood.

[Diagram: the client-side clustering pipeline built with Geofield, Views, and Leaflet]

In this diagram, we are using Geofield, Views, and Leaflet to produce a map with client-side clustering. The server side is on the left, and the client (browser) side is on the right. Geofield stores the geodata, and a Views query produces either a single point or an array of points. In either case, PHP is rendering the point data one row at a time, and the client-side clustering happens after this delivery.

The reason this breaks down at larger scale is fairly logical: geocoded data is encoded in text-based formats like WKT or GeoJSON that must be parsed and processed before the map can be rendered. Obviously, the larger the dataset, the longer the receive-decode cycle takes. Further, if point data is delivered via PHP during page load, as opposed to asynchronously with AJAX, the whole page will not start rendering until all of the point data has loaded.

Speaking in terms of sequence at any scale, the load process looks like this:

  1. Views (PHP) renders each data point as a row of output, one at a time at page load time.

  2. Views (PHP) renders the popup info (hidden) at page-load time.

  3. The mapping library (JS) parses the location data.

  4. The mapping library (JS) clusters the points.

  5. The mapping library (JS) renders the map.

In this single cycle of receive-decode-render, PHP delivers the raw data, and JavaScript performs the transformation of the data into the visualization. At large scale, the client side is shouldering the majority of the computation, and page loads become highly dependent on the efficiency of the client device. Lightly resourced devices, or even older PCs, will suffer unusable page-load times.

In order to improve the performance at large scale, we want to perform clustering on the server side, so that the client-side stack only sees any given cluster as a single feature. We have an idea of what the server can handle, and by offloading the more-complex computations to a more predictable environment, we can normalize the performance across devices.

In layman’s terms, we’re simply reducing the number of “things” the client-side browser is seeing in the map, i.e., several clusters vs. thousands of points.

The next question is how to implement clustering on the server side. It turns out that we could borrow from a recently-developed web service: geohash.org.

Geohashing

Geohashing is a fairly recent development in geolocation. In 2008, the Geohash algorithm was developed by Gustavo Niemeyer when creating the Geohash web service. The service was initially developed to identify geographic points with a unique hash for use in URIs. For example, the link http://geohash.org/u4pruydqqvj uniquely identifies a location in the northern tip of Denmark.

The Geohash service simply turns latitude-longitude pairs into a hash code, appropriately named a “geohash.” Any given point can be geohashed to an arbitrary level of precision, which is represented by the hash length: the shorter the hash, the less precise it is, and vice versa. The Wikipedia overview of geohashing offers a good example of how the hashes are produced. For our purposes here, the important idea is that geohashing a group of points creates a “spatial index” (an abstract search index) from which it is computationally cheap to infer the relative proximity of points.
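
As a toy illustration of that idea (not the Geocluster implementation itself), nearby points share long geohash prefixes, so truncating hashes to a prefix length chosen per zoom level is enough to group them into clusters:

    // Toy example: cluster points by truncating geohashes to a shared prefix.
    $points = array(
      'u4pruydqqvj' => 'Point A',
      'u4pruydqqvm' => 'Point B',  // long shared prefix with A => very close
      'u4pruybcdef' => 'Point C',  // shorter shared prefix => farther away
    );
    $precision = 7; // shorter prefix = coarser clusters (think lower zoom levels)
    $clusters = array();
    foreach ($points as $hash => $label) {
      $clusters[substr($hash, 0, $precision)][] = $label;
    }
    // $clusters now groups A and B under 'u4pruyd', with C alone under 'u4pruyb'.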

The Geocluster Module

The Geocluster module provides a Drupal implementation of the geohash algorithm that integrates with Geofield, Views GeoJSON, and Leaflet to provide server-side clustered GeoJSON map feeds. (OpenLayers could likely be swapped in for Leaflet; nothing has been documented toward that end, but it’s just a GeoJSON feed that needs to be consumed.)

The module is under active development and offers many opportunities for optimization. The project was originally developed by Josef Dabernig (dasjo) as a proof-of-concept for a Master’s thesis on large scale mapping in Drupal. For those interested in performance optimization, the thesis is worth a read.

In a nutshell, three server-side clustering strategies were compared against client-side clustering as a baseline, with a target of 1-second page load.

[Benchmark plot from the thesis: page load time versus number of points for client-side, PHP post-query, MySQL, and Solr clustering]

Here we see that as we transition from 100 points to 1,000, client-side clustering becomes a lost cause. Even clustering in PHP after the database query (known as post-query clustering) is not of much help. We only start to see usable performance once we move to query-level clustering with MySQL or Apache Solr.

We ended up implementing MySQL clustering and were able to achieve sub-second page loads. At the time this application was developed, Solr clustering was still under development, and whether it can really scale well beyond 100,000 points is not something we know for certain.

Again, empirically, we know that client-side clustering starts to break down beyond a few to several hundred features, which lines up with the performance benchmarking in the plot. This is a convenient threshold for switching from client-side to server-side, query-level clustering. There is some grey area between several hundred points and 1,000, so testing your use case will determine what is best for you.

Onward!

In our next post, we’ll look at the recipe for the Leaflet-Geocluster stack, and take a look at how it was implemented. We’ll cover our pain points, what we did about them, and some customizations towards the application.

Subscribe to our newsletter to keep up with new projects and blogs from the Phase2 team!

“Reply to Anything” in Open Atrium

Since its beginning, Open Atrium has had Discussion Forums, allowing members to collaborate on various topics.  In the Winter 2015 2.3x release, we added Related Content, which allowed you to attach a Discussion Post to other content, such as an Event.  But what if you wanted to have a discussion around a piece of content directly, without creating a separate related discussion post?  In the new Spring 2.4x release, you can “Reply to Anything,” whether it’s core content such as an Event or a custom site-specific content type.

Drupal Comments

At a high level, the “Reply to Anything” release of Atrium was a simple task of enabling normal Drupal comments for any content type.  The Atrium Discussion forum didn’t use Comments; instead it used the same content type for “Replies” as for the “Original Post.”  While this was architecturally convenient and allowed Replies to contain attachments, it didn’t allow Replies to be added to other content types easily.

Comments in Drupal tend to get a bad rep.  Many feel that comments look ugly, don’t support rich content such as attachments, or are subject to spam and require serious moderation.  The challenge for Atrium was to enable Comments while dealing with some of these complaints.

Improving Comments

Significant testing and feedback went into the original design of the Atrium Discussion forums.  We decided to implement the same functionality for Comments, plus some new features:

  1. Personalization: new comments are auto-expanded, while old comments are collapsed.
  2. Attachments: rather than just allowing attachments to be added to Comments, the entire Related Content paragraphs architecture was re-used.  More on this below.
  3. Migration: previous Discussion Replies are migrated automatically into Comments.
  4. Easy Administration: rather than editing the content type to enable Comments, a central UI interface is used to choose which content types use comments.
  5. Threaded Discussions: support comments to comments, allowing fully threaded discussions.

The result is a consistent and intuitive interface across Atrium for handling comments to content, whether it’s a Discussion post, a worktracker Task, an Event, a Space, or any other type of content.

Rich Comment Content

Re-using the Related Content work from v2.3x, we were able to support very rich comment content.  For example, the screenshot in the previous section shows a comment with an image and two columns of text.  Rather than just using the WYSIWYG to embed an image, that comment uses the Media Gallery paragraph type to add the image, along with a Text paragraph to add two columns of text.  You can even use the Related Content to embed another live discussion, along with its own comments and reply form, within another comment.  Comment Inception!   In the past you could only add a file attachment to a Reply.  With Related Content you can add a Related Document to a Comment, which might be a file attachment, but might also be just a Wiki-like web document.

When integrating the Related Content, we also made a large number of UX improvements.  The different paragraph types are now represented with icon “tabs” along the bottom of the WYSIWYG editor, much like the tabs at the bottom of your Facebook status field.  Using a Drupal hook you can even specify icons for your own custom paragraph types!  This new UX for Related Content paragraphs was taken from Comments and then extended to work on the Body of the node/edit form, providing a consistent Related Content experience across all of Atrium.  You can separately control which paragraph types are available for the node Body vs. available for Comments.

What can I do with all this?

Technical features are fine, but it’s really all about the client needs they can address.  Here are some of the use cases you can now solve using Atrium:

  1. Feedback and Collaboration on Anything:  Threaded discussions on any content type, not just the Discussion posts, without needing to use Related Content.  Because of Atrium’s strong data privacy controls, comments are added by Members of a Space and are less subject to spam or needing moderation.  However, full comment approval moderation is also still available.  Comment threads can be open or closed on a per-node basis.
  2. Social Feeds: Enable comments on Space, Section, or even Team pages, providing a “Status Feed” functionality.  Users can quickly and easily post comments (status updates) and have them appear in the Recent Activity.  If you enable comments on User Profiles (from the Open Atrium Profile2 app), you can even support the concept of a “Facebook Wall” where users can post comments (status) on a specific user’s profile dashboard.  These areas still require some UX improvements, which you will see in future versions of Atrium to make this a more usable social experience, but you can get started with it now.
  3. Fieldable Comments:  By adding new paragraph entity bundles, you are essentially adding optional fields to comments.  Developers can define templates to control the edit and view experience for custom fields.  Using the included Comments Alter module, comments can actually change the values of fields on the parent content node, such as the Status, Type, and Priority fields on the worktracker Task content.
  4. Email Integration: As with past Discussion Replies, adding a Comment causes a notification email to be sent.  Users can reply to the email and the reply will be posted back to the Atrium site.  This now works with any comments on any content type, such as replying to comments from a worktracker Task.

Conclusion

Many users of Atrium have asked for comment support, which was specifically disabled in past versions.  Now Atrium fully supports the Drupal Comment system and everything sites want to do with it.  Integrating the Related Content work into Comments provides powerful functionality that is implemented consistently and intuitively across the entire platform.  Allowing Comments on anything further pushes the core mission of Atrium to enable and enhance collaboration across your organization.

Want to learn more or see the new comments in action?  Sign up for the Open Atrium Spring Release webinar on Thursday, June 4th, at 12:30pm EST.

Configuration Management in Drupal 8

Drupal 8 is pretty exciting for many reasons: decoupled templating system, built-in internationalization, views in core, and Symfony are some of the highlights. But one of the most exciting Drupal 8 advancements is a little difficult to understand – the Configuration Management Initiative (aka CMI). CMI was one of the main initiatives identified when Drupal 8 was in its planning stages, and through the hard work of many core developers and contributors like Greg Dunlap, David Strauss, and Alex Pott (pictured below, photo credit: Angie Byron), it is a core part of Drupal 8 (THANK YOU!!).

[Photo: Greg Dunlap, David Strauss, and Alex Pott]

First things first, I’d like to clarify what “Configuration” means in this context. Configuration refers to the settings managed through Drupal’s interface that get stored in the database alongside content but actually define the structure and functionality of the site.

Simply stated, CMI evolved Drupal to have an underlying architecture that facilitates easily saving, porting, and moving configuration changes from one environment to another. If you have one site, this is not a huge problem – but if you have more than one, let’s say 10 or even several hundred, this can improve process and stability in huge ways.

Drupal 5 & Drupal 6 – Configuration management by process

My experiences in Drupal 5 and Drupal 6 revolved around managing configuration changes through process, communication, and documentation that existed outside of the core system. While there were some early adopters experimenting with other solutions (e.g., Features), the main ways of managing configuration between environments that I was familiar with in the early Drupal days were process-based and centered around communication with team members to make changes.


Here is one “sample” process practiced by some of my Phase2 co-workers in the “phase 1” days, when we needed to reskin or redesign any music websites on a Drupal 6 platform:

  • Step 1 – Make your changes on the stage or dev site for content or configuration.
  • Step 2 – Push up your code that might change themes, colors, or modules.
  • Step 3 – Enter your *new* content in the live site, but make sure it’s unpublished.
  • Step 4 – Get the client to take a look at the stage site, and have your QA team sign off.
  • Step 5 – Get launch approval.
  • Step 6 – Prepare for launch.
  • Step 7 – Review the txt files or checklists of configuration changes.
  • Step 8 – Start exporting views.
  • Step 9 – Put the site in maintenance mode / turn off user logins (maybe you are freezing the database).
  • Step 10 – PUSH UP THE CODE PART.
  • Step 11 – GO CRAZY FOR 0.5-6 HOURS, depending on how different the site was, updating all the configuration – the views, the checkboxes, the new blocks that need to go in a newly specified theme region.
  • Step 12 – Double check your work.
  • Step 13 – Pull back the curtain & cross your fingers.

What could go wrong in a 13-step process with multiple handoff points? I like to call this the checkbox problem. If you have ever gotten into administering, site building, or managing a Drupal site, you know that there are lots of checkboxes. These checkboxes are settings that make for a very powerful, extensible, and flexible CMS. The gray area between content and configuration is not always clear to folks when they have a CMS. The ability to change anything does not always mean that you should.

Drupal 7 – Features and Automation

Next came Drupal 7 and automated builds, when Features was connected with continuous integration solutions like Jenkins. As managing configuration became a bigger issue, the Drupal community did its best to find a solution, and Features, originally designed to be a packaging module for site features, ended up being the de facto way of managing configuration in Drupal 7.

Developers set up elaborate features in code and then pushed them through the environments. Capturing configuration changes while site building was definitely an ideal workflow for many developers, especially as automated builds became more regular.  Although this worked out well, it still felt “bolted on,” and issues with overrides were common.


Mike Potter recently wrote about Features in Drupal 8, and he spoke on the issue at DrupalCon. Mike envisions a world where Features is not for stable deployment but used as originally intended alongside CMI.

Drupal 8 – it’s YAML time

Drupal 8 configuration management is rooted in what is essentially a re-architecture of the way in which configuration is stored and managed. Modules store their configuration settings in YAML files, a core standard applied to all modules, and this baseline architecture creates flexibility for new solutions.

My work on a recent Drupal 8 project with the Memorial Sloan Kettering team would not have been possible without the underpinnings of CMI.  When I spoke with the team, one of the key sound bites came from Drupal lead / CMS consultant at MSK, Jacob Rockowitz: “It just works.”

In fact, the solution pioneered by MSK Tech Lead Jacob Rockowitz for webform management was based in YAML.  We had a chance to discuss the YAML forms approach with some key community contributors at a BOF at DrupalCon. One of the exciting areas of expansion is seeing how the innate architecture of CMI in Drupal 8 will be extended for innovative solutions like YAML forms.


DrupalCon D8 & CMI

Want to hear a little more historical context & see the D8 CMI in action with a real-life example workflow? Watch the recording of the CMI on a Managed Workflow session from DrupalCon Bogota, where I presented with Matt Cheney of Pantheon on the Latin American leg of his CMI world tour.  More CMI wizard tour highlights include an architectural bent from Amsterdam & a beta 10 live demo from DrupalCon LA.

Wondering where Features fits in the mix of the configuration landscape in Drupal 8? Watch the Features guru, fellow Phase2er Mike Potter, break down the Features in Drupal 8 story at DrupalCon LA.

Curious to hear about the adventure of building a site in Drupal 8 on top of a configuration system that “just works?” Read our case study, or watch the panel in the DrupalCon Business showcase to hear the story from Memorial Sloan Kettering, Digitas, and Phase2 team members as we talk redesign strategies, community showcase, D6-to-D8 migration, front-end integration, and more!

MSK Presenters

Want to stay up to date with the latest in Drupal 8 and beyond? You can hear about all our Drupal 8 adventures in Phase2’s newsletter.

Visual Regression Testing Part 2: Extending Grunt-PhantomCSS for Multiple Environments

Earlier this month, I explored testing dynamic content with PhantomCSS in the first post of a multi-part blog series on visual regression testing. Today, I dive a little deeper into extending grunt-phantomcss for multiple environments.

Automation is the name of the game, and so it was with our visual regression test suite. The set of visual regression tests that we created in part 1 of this series allowed us to test for cosmetic changes throughout the Department of Energy platform, but it was inconvenient to fit into our workflow. The GruntJS task runner and the Jenkins continuous integration tool were the answer to our needs. Here, in part 2, I’ll walk through how we set up Grunt and the updates we contributed to the open source grunt-phantomcss plugin.

Grunt is widely used nowadays, allowing developers to script repetitive tasks, and with its increasingly large repository of plugins it would be crazy not to take advantage of it. Once installed, typically through NPM, we set up a Gruntfile containing the list of tasks available for developers to execute through the command line. Our primary task would not only be in charge of running all of the visual regression tests, but would also allow us to specify the environment we wished to run our tests against. With DOE, we have four such environments: local development environments, an integration environment, a staging environment, and our production environment. We needed the ability to maintain multiple sets of baselines and test results, one for each environment. Achieving this required an extension of Micah Godbolt’s fork of grunt-phantomcss.

The grunt-phantomcss plugin establishes a PhantomCSS task to run the test suite. In the original plugin all baselines and results are stored in two top-level directories, but this is not ideal because it conflicts with the notion of modularity. Micah Godbolt’s fork stores each test’s baseline(s) and result(s) in the directory of the test file itself, keeping the baselines, results, and tests together in a modular directory structure with less coupling between the tests. This made Micah’s fork a great starting point for us to build upon. Adding it to our repo was as easy as adding it to our package.json and running npm install.


Grunt

After mocking up our Gruntfile based on the grunt-phantomcss documentation, we needed to specify the environment to run our visual regression test suite against. We needed the ability to pass a parameter to Grunt through the command line, allowing us to execute a command such as the one below.
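
Something along these lines, where the task and environment names are assumptions:

    grunt phantom:staging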

First, we needed to establish the URL associated with each environment. Rather than hard-coding this into the Gruntfile we created a small Gruntconfig JSON file of key-values, matching each environment to its URL. This allows other developers to easily change the URL depending on their environmental specifications.
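
A sketch of that config file; the environment keys mirror the four environments above, and the URLs are placeholders:

    {
      "local": "http://doe.local",
      "integration": "http://integration.example.com",
      "staging": "http://staging.example.com",
      "production": "http://www.example.gov"
    }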

Importing the key-value pairs from JSON into our Gruntfile was as easy as a single readJSON function call.

Next, we needed a Grunt task that would accept an environment parameter and pass it through to grunt-phantomcss. This way CasperJS could store the baselines and results in a directory specific to the environment specified. We achieved this by creating a parent task, “phantom,” that would accept the env parameter, set the grunt-phantomcss results and baselines options, as well as a new rootUrl option, and then call the “phantomcss” task.
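
A sketch of that wiring inside the Gruntfile; the option names follow the post, while the directory layout and defaults are assumptions:

    module.exports = function (grunt) {
      // Map each environment name to its base URL (see gruntconfig.json above).
      var environments = grunt.file.readJSON('gruntconfig.json');

      // Parent task: "grunt phantom:staging" runs the suite against staging.
      grunt.registerTask('phantom', 'Run visual regression tests against an environment.', function (env) {
        env = env || 'local';

        // Point the plugin at the right site and at per-environment
        // directories for baselines and results.
        grunt.config.set('phantomcss.options.rootUrl', environments[env]);
        grunt.config.set('phantomcss.options.baselines', 'baselines/' + env);
        grunt.config.set('phantomcss.options.results', 'results/' + env);

        grunt.task.run('phantomcss');
      });
    };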

The rootUrl option is what eventually gets passed to CasperJS in each test file to prepend to the relative URL of each page we visit.

Extending Grunt-PhantomCSS

Now that the Gruntfile was set up for multiple environments, we just needed to update the grunt-phantomcss plugin. With Micah’s collaboration we added a rootUrl variable to the PhantomJS test runner that would accept the rootUrl option from our Gruntfile and pass it to each test.

We made sure to maintain backwards compatibility here by keeping the rootUrl directive optional, so old integrations of the grunt-phantomcss plugin would not be adversely affected by our updates. With that in place, the final step was to update our tests to account for the new rootUrl variable. Here we prepend the rootUrl to the now-relative page URL.
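
Assuming the updated runner exposes the option to each test as a rootUrl variable (the exact mechanism in the plugin may differ), a test then looks something like this, with an illustrative path and selector:

    casper.start(rootUrl + '/services').then(function () {
      // Take a baseline/comparison screenshot of a stable region of the page.
      phantomcss.screenshot('#main-navigation', 'main-navigation');
    });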

With the grunt-phantomcss plugin updated, we were able to run our visual regression tests against multiple environments, depending on our needs. We could run tests after pull requests were merged into development. We could run tests after every deployment to staging and production. We could even run tests locally against our development environments as desired.

Bonus: Tests for Mobile

After all our success thus far, we wanted to add the ability to specify the viewport to our Gruntfile. We have particular tests for each of our four breakpoints: full-size for desktops, mid-size for large tablets, narrow for “phablets”, and tiny for phones. This was an easy lift, just requiring a few more tweaks to our Gruntfile.


Here we set up four sub-tasks within the “phantomcss” task, one for each breakpoint. Each subtask specifies the viewport size and the location of the associated test files. Then we updated our parent task “phantom” to take two arguments: an environment parameter and a breakpoint parameter. Both also needed defaults in case either argument was not specified.
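
The phantomcss section of the Gruntfile then grows one subtask per breakpoint, roughly like this (viewport sizes and test paths are illustrative):

    grunt.initConfig({
      phantomcss: {
        full:   { options: { viewportSize: [1280, 800] }, src: ['tests/full/**/*.js'] },
        mid:    { options: { viewportSize: [1024, 768] }, src: ['tests/mid/**/*.js'] },
        narrow: { options: { viewportSize: [600, 800] },  src: ['tests/narrow/**/*.js'] },
        tiny:   { options: { viewportSize: [320, 480] },  src: ['tests/tiny/**/*.js'] }
      }
    });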

Additionally, we didn’t want a single test failing to halt the execution of the rest of our tests, so we added the grunt-continue plugin to our package.json. Grunt-continue essentially allows all tests to run regardless of errors, but will still cause the overall “phantom” task to fail in the end if a single test fails. Here is what our new “phantom” task looks like:

It was a success! Through the power and versatility of Grunt and the various open source plugins tailored for it, we were able to achieve significant automation of our visual regression tests. We were happy with our new ability to test across a range of environments, combating regressions and ensuring our environments are kept in a stable state.

But we hadn’t reached our full potential yet. The workflow wasn’t fully automatic; we still had to manually kick off these visual regression tests periodically, and that’s no fun. The final piece of the puzzle would be the Jenkins continuous integration tool, which I will be discussing in the final part of this Department of Energy Visual Regression Testing series.

Subscribe to the Phase2 mailing list to learn when the next post in the visual regression test series goes live!

Phase2 Takes Los Angeles: Watch All Our DrupalCon Sessions!

What a Week!

It’s hard to believe DrupalCon 2015 has already come and gone. As always, it was an event jam-packed with knowledge sharing, learning, sprinting, and of course a healthy dose of fun as we celebrated with the Drupal community. Add some virtual reality and 360 degree videos into the mix, and it’s safe to say we had a fantastically geeky time in Los Angeles (just the way we like it!).


Catch All the Recorded Phase2 Sessions

If you weren’t able to attend all the Phase2 sessions you were looking forward to, never fear! Thirteen Phase2 experts presented at this year’s DrupalCon, and each of their sessions is already available online. Catch up on them here:

 


Plus – Drupal 8 Sessions!

After launching one of the first enterprise Drupal 8 sites in the United States earlier this month with Memorial Sloan Kettering Cancer Center, we were excited to share what we’ve learned at DrupalCon. Watch our Drupal 8 sessions now!

 


Want to see more photos of the Phase2 team at DrupalCon Los Angeles? Visit our Flickr account!

Subscribe to the Phase2 mailing list for more from the Phase2 experts!

Driving Drupal 8 Forward Through Early Adoption

Last week, we were proud to announce the launch of Memorial Sloan Kettering Cancer Center’s enterprise Drupal 8 site, one of the first major Drupal 8 implementations in the U.S. One of the awesome benefits of working on this project was the opportunity to move Drupal 8 forward from beta to official release. Phase2 has been instrumental in accelerating Drupal 8, and we were excited that Memorial Sloan Kettering was equally invested in giving back to the community.

Benefits of starting early

Getting started during the beta phase of Drupal 8 meant that it wasn’t too late to get bugs fixed and tasks completed in core. Even feature requests can make their way in if the benefits outweigh the necessary changes to core.

Similarly, if other agencies and shops starting to use Drupal 8 are going through many of the same issues, there is more of an opportunity for collaboration (both on core issues and on contrib upgrades) than on a typical Drupal 7 project.

MSK, first Drupal 8 site launched

By the numbers

As of this writing, 57 patches have been directly contributed and committed to Drupal 8 as part of this project. Additionally, nearly 100 issues have been reviewed, marked RTBC, and committed. Hundreds of old and long neglected issues have been reviewed and moved closer to being ready.

Often, to take a break on a particularly tricky issue, I’d switch to “Issue Queue Triage” mode, and dive into some of the oldest, most neglected corners of the queue. This work brought the oldest Needs Review bugs from ~4 years to less than 4 months (the oldest crept back up to 6 months once I started circling back on myself).

This activity is a great way to learn about all the various parts of Drupal 8. Older issues stuck at Needs Review usually need, at minimum, a substantial reroll. I found that once I tagged something with Needs Reroll, legions of folks swooped in and did just that, increasing activity on most issues and getting many of them eventually committed.

One of my favorite but uncommitted patches is adding Views integration for the Date module. It’s still marked Needs Review, so go forth and review! Another patch, which is too late for 8.0.0, adds a very basic draft/moderation workflow to core. This patch is another amazing example of how powerful core has become: it is essentially just UI work on top of APIs already in Drupal 8.

Brad Wade, Phase2 Developer at DrupalCon Los Angeles

Porting contrib modules to Drupal 8

This project has contributed patches and pull requests for Drupal 8 versions of Redirect, Global Redirect, Login Security, Masquerade, Diff, Redis, Memcache, and Node Order.

One of the remarkable things about this project, and a testament to the power of Drupal 8, is how few contributed modules were needed. Compare some 114 contrib modules on the Drupal 6 site to only 10 on the Drupal 8 site.

Considering Drupal 8 for your organization? Sign up for a complimentary Drupal 8 consultation with the Phase2 Drupal 8 experts

 

Evan Liebman Talks Drupal 8 and the Importance of Community at MSK

Over the past several months, our team has had the pleasure of helping to build Memorial Sloan Kettering Cancer Center’s (MSK) new sites on Drupal 8. Evan Liebman, Director of Online Communications and Technology at MSK, shares his experience with Phase2, Drupal 8, and everything in between.

Q: How did the culture of innovation and leadership at MSK play into the decision to adopt Drupal 8?

A: When we were evaluating our CMS options, what drew us to Drupal 8 was its clear alignment with several of MSK’s strategic pillars. First, innovation. We have researchers and clinicians at MSK who regularly push boundaries to innovate and generate new knowledge. We are inspired by their relentless efforts and are driven to do the same in our space. Second, sustainability. Because we were migrating from a Drupal 6 site, we had to choose between upgrading to Drupal 7 and quickly following a launch with a move to Drupal 8 or making the leap to Drupal 8, which was still a beta. We saw more of a long-term future with Drupal 8. Third, talent recruitment. The use of Symfony and Object Oriented Programming in Drupal 8 means that Drupal is becoming more accessible to more developers. In essence, their inclusion is a signal that the Drupal community will only continue to grow, Drupal and MSK growing with it.

Q: What was the most unexpected thing about building in Drupal 8?

A: To be honest, the most surprising part was how easy it was! From the things we’d heard, we thought it was going to be extremely difficult, and there was a learning curve involved. But once we’d gotten past that, Drupal 8 wasn’t as challenging as expected. The tools built into Drupal 8 core really helped to speed up the process. For instance, we went from 40 custom modules on Drupal 6 to 10 on Drupal 8 because more functionality was included in core.

Q: What is the biggest benefit of building in Drupal 8?

A: The most beneficial thing about Drupal 8 is the community effort that surrounds it. This is really important to us at MSK. Our clinicians and researchers work together across departments and specialties to give our patients the best care possible. It’s a multi-stakeholder effort. So the opportunity to be a part of the Drupal community, giving to others and knowing that in some ways it comes back to us — that was a major benefit of Drupal 8. Then, of course, there’s the added benefit of having the most up-to-date technology, which is important for being on the cutting edge in the healthcare industry.

Q: What are the most important Drupal 8 modules/code Memorial Sloan Kettering contributed back to the community?

A: One of the most exciting contributions is an alternative to Web Forms we created using YAML, thanks in large part to Jonathan Hedstrom of Phase2 and Jacob Rockowitz of Big Blue House. We’ll be sharing more details on that one at DrupalCon LA. But a big part of our contributions related to un-blocking issues and fixing bugs in core. As of now, 53 patches have been directly contributed and committed to Drupal 8 as part of this project, and nearly 100 issues have been reviewed. All of this work has kept Drupal 8 moving forward toward an official release.

Check out Jonathan Hedstrom’s blog post for details on specific Drupal 8 patches, issues, and modules!

Q: So we’ll be seeing Memorial Sloan Kettering at DrupalCon?

A: Yes, you will! This was a really unique project in that it brought together three different organizations — MSK, Phase2, and DigitasLBi — as we all collaborated to learn the ropes on new technology. We feel it is important that we tell our story together, especially because there is a lot to tell! So you can look for us on stage at the business showcase at 10:45am on Wednesday. I will also be joining Frank Febbraro and Michael Le Du of Phase2 to discuss Drupal 8 for enterprise organizations on a panel called “Drupal 8, Don’t Be Late.” In addition, MSK will participate in several BOFs throughout DrupalCon, including one focused on the YAML Forms module — so stay tuned for more information on those!

Q: What are some ways in which the MSK team collaborated with Phase2 and Digitas to move the project forward?

A: Before we could effectively collaborate with our partners, we developed a core internal team to help us navigate through the project. Then, it was very important that each organization had a seat at the table from the beginning, so everyone could see the roadmap from the start. Equally crucial was keeping open lines of communication. MSK really prioritized internal and cross-organizational communication, and that paid off during the later stages of the project.

Q: What advice would you give to other enterprises embarking on a Drupal 8 project?

A: Take the plunge! There’s a sense in the community that Drupal 8 is daunting, and although that may be true in the beginning, your velocity picks up quickly. There’s definitely a learning curve, but it lasts for a relatively short period of time. So if you’re on the fence, go for it! But make sure you choose wisely when selecting your partners. We chose Phase2 because of their experience as early adopters of previous versions of Drupal, and that expertise served us well.

Learn more about Phase2’s work with Memorial Sloan Kettering here.

8 Must-See Sessions at DrupalCon Los Angeles

DrupalCon Los Angeles is just around the corner and there are a ton of awesome sessions to attend. Every year, the top minds in the Drupal community present their thoughts on the tools and processes that will shape the future of Drupal. As always, the number and quality of the sessions available for consumption at DrupalCon is immense.


Having been part of the DrupalCon planning committee, I’ve been privileged to help shape the track selection criteria, review session submissions, and provide support to other committee members. To help others narrow their options for session attendance, I’ve created a short list of sessions that have me excited to attend.

Disclosure – While I’m excited that several sessions have been submitted by the fine folks at Phase2, in the interest of neutrality I’ve deliberately excluded sessions submitted by my co-workers here. You can read up on Phase2’s sessions here.

What are the trends?

One advantage of having to review all the sessions on the Coding & Development track is quickly becoming acquainted with the patterns emerging in the community. This year, of course, there was a plethora of Drupal 8 sessions – which makes sense given that D8 is in beta.

Like many in the Drupal Community, I’m excited to see the new features and improvements to Drupal core. However, the move to a more Object-Oriented philosophy means that the old procedural ways of doing things are shifting. I’m looking forward to the Symfony & D8 sessions addressing this.

The other trend that has been taking the Drupal world by storm is Headless Drupal. As far as I can tell, the common thread here is Drupal’s front-end theming layer being replaced by JavaScript applications such as Angular, React, or Ember. Drupal’s role in the process is one of a database UI to curate content and manage non-theme layer configuration, such as editorial workflow and third party content aggregation.


8 must-see DrupalCon sessions:

Drupal 8: The crash course

DrupalCon without Larry Garfield (Crell) would be like spring in the Northwest without daffodils. I routinely enjoy his talks and blog posts, and he excels at presenting a complex topic as a series of easy-to-understand bites that clearly explain even the thorniest of topics. His session this year covers an introduction to the new systems introduced in Drupal 8, which assumes (but does not require) some knowledge of Drupal 7. This is one of many Drupal 8 sessions, but unless you’ve already been working in D8, it wouldn’t be a bad idea to target this one in particular.

Drupal 8’s render pipeline

This session focuses on the new ways that Drupal renders page content in version 8 – specifically the new caching regime that caches entities instead of just nodes, improved cache invalidation (cache tags, & bubbling FTW!), and so forth. Cache tags mean that individual cache components can be reset on certain events (expiring blog post feed cache when there’s a new post, caching the latest version of the page on node save, not after 2 hours). This is a huge win for performance nerds, and will have a significant effect on Drupal’s performance as a standalone application, as well as being part of a larger web stack.

What’s next for Drupal.org: Updates on Strategic Initiatives

Josh Mitchell, the new CTO of the Drupal Association, has come up with a set of strategic initiatives to improve D.O. as a resource for all members of the Drupal community. This is something that I feel has been a long time coming, and I for one am excited to hear CTO Mitchell’s ideas for the future of D.O.

Issue Workspaces: A better way to collaborate in Drupal.org issue queues

Interacting directly with the code base on D.O. is something of a challenge for people new to the community: it requires fairly advanced knowledge and can be a big barrier to new folks contributing on issue queues. The Drupal issue queue needs to modernize to mimic best-of-breed code repository tools such as Bitbucket and Github. It’s exciting to see how Drupal.org is evolving to support a more git-friendly workflow.

We need revisions and CRAP everywhere in core

Dick Olsson (Dixon_), maintainer of the Deploy and UUID modules, posits that while content staging will never be in core, it should be easy enough to implement a Create Read Archive Purge model of content workflow. I believe this session will extend his previous sessions from Austin and Amsterdam, focusing on what needs to be done to extend this functionality out from core using contrib modules. This session also has the added benefit of having a related sprint on Friday.

What Panels can teach us about Web Components

Drupal often blurs the line between data and display layers in an application, as anyone who has written a custom theme function or a template file can attest. The Panels module is an effective way to decouple display and data layers. Anyone who has been involved with the Panels module knows its immense power. Therefore, this could be an interesting session to preview potential improvements to Drupal core (which seems to have been unaffected by the recent trend towards Headless Drupal).


To the Pattern Lab! Collaboration Using Molecular Design Principles

For the uninitiated, Pattern Lab is a dynamic prototyping system that breaks a page down into small, self-contained blocks of content. These blocks can be combined into multiple configurations without rebuilding everything from scratch. And since the prototype is viewed in the browser, elements are styled with CSS and the markup can be edited to mimic Drupal’s native markup structure. As a result, the prototype closely matches the styling of the eventual Drupal site, reducing duplicated effort between the prototyping and theming phases. As a bonus, because the system uses the web stack, the site can be designed to be responsive from the beginning.

Making Content Strategic before “Content Strategy” Happens

Content Strategy can be defined as the process of planning content to maximize its impact on users. I’m excited to hear that people in the community are also interested in creating content that is engaging, compelling, and interesting.

So many sessions, so little time

The sessions below run simultaneously, so you will have to choose which one you’d rather attend. But fear not: all of the sessions should be recorded for viewing at a later time.

If I were a themer or coder, and a fan of fast demos, I’d go to: 0 to MVP in 40 minutes: Coder and Themer get rich quick in silicon valley. If I were a SysOps aficionado interested in hearing from a couple of Drupal community heavyweights, I’d go to: You Are A Golden God: Automate Your Workflow for Fun and Profit.


This next one is a toughie. This time slot is occupied by three sessions.

For the Content Strategy nerd in me, nothing makes me happier than to see the DA taking steps to create model content that helps communicate Drupal’s mission: Content Strategy for Drupal.org. For the front-end, storytelling nerd in me, there’s: Styles of Storytelling: Cultivating Compelling Long-form Content. Finally, Steve Persch (stevector) makes the case for taking Drupal out of the business of generating a webpage’s markup: Rendering HTML with Drupal: Past, Present and Future.

If I were a junior dev looking to level up into a more senior role, I’d attend: De-mystifying client discovery. If you haven’t already been to a Headless Drupal session, or you’re a fan of Amitai Burstein’s colorful presentation style, go to: Decoupled Drupal: When, Why, and How.

Todd Nienkerk’s talk on company culture was very warmly received at DrupalCon Latin America this spring, and I’m excited to hear it in person: Creating a Culture of Empowerment. Another session has been on my radar ever since Ryan submitted it: Routes, controllers and responses: The Basic Lifecycle of a D8 Request. He’s a must-see presenter.

If you’d like to get better acquainted with the D8 plugin system, An Overview of the Drupal 8 Plugin System would be right up your alley. Larry Garfield (Crell) is also presenting a workshop on what shouldn’t be the focus of Drupal core: No.

Clearly, DrupalCon LA will be an exciting opportunity to grow, both as a developer and as a community member. As always, I’m looking forward to attending many of these sessions, and to the opportunity to network and contribute to the success of the Drupal project.

Transforming Enterprises with Drupal 8

As we’ve said before, enabling organizations to transform digitally is at the heart of Phase2’s focus on content, collaboration, and experience. A key element of effective transformation is the combination of adaptability and foresight – in other words, the ability to see what new technologies are coming, understand their potential, and harness their power for your benefit.

In the world of open source CMS solutions, that imminent technology is Drupal 8. Although a long time coming, Drupal 8 is still an unknown quantity for many organizations. The way we see it, companies’ willingness to pick it up and run with it (strategically!) will play a major role in their success in the coming years.

MSK & Drupal 8: A Commitment to Innovation

Last year, Phase2 teamed up with Memorial Sloan Kettering Cancer Center to act as the organization’s Drupal technology partner after MSK made the forward-looking decision to become a Drupal 8 pioneer. The MSK team had more than a simple migration in mind: they set out to build one of the very first enterprise-scale Drupal 8 sites, even though D8 existed only in beta at the time. The decision reflected the center’s ability to see the big picture and boldly pursue innovation. In everything from patient care to biomedical research, MSK constantly seeks new ways to advance beyond what was previously thought possible, and its attitude towards digital transformation was no different.


Major Perks of D8

In addition to the power of Drupal 8 core, which allowed the team to build the site with fewer than ten modules beyond core, there were vast improvements in extensibility, testing, templating, and configuration management.

Extensibility

The ability to extend plugins and services is far more accessible in Drupal 8. Instead of struggling with yet-to-be-ported contrib modules, our team was free to write custom code fitted specifically to MSK’s needs, and inheritance made that custom code easier to manage.
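As a rough illustration of what that extensibility looks like in practice (a generic sketch, not code from the MSK project; the msk_example module name is made up), a custom block in Drupal 8 is just a small class that extends a core base class:

```php
<?php

namespace Drupal\msk_example\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * A hypothetical block built on core alone, with no contrib dependency.
 *
 * @Block(
 *   id = "msk_example_notice",
 *   admin_label = @Translation("Example notice")
 * )
 */
class ExampleNoticeBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    // Everything inherited from BlockBase (configuration, caching, access)
    // comes for free; only the project-specific output lives here.
    return [
      '#markup' => $this->t('Content tailored to the project.'),
    ];
  }

}
```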

Object-Oriented Programming

One of the trickiest learning curves of Drupal 8 was also the catalyst for a lot of saved time. Object-oriented programming forced us to take a highly structured approach, isolating our business logic into objects. The result is a set of separated pieces that can move forward independently of one another: you can run your migration without knowing how things will be themed, and you can theme things without knowing how all of the content will be structured.


Testing

The level of testing integrated directly into Drupal 8 core makes it significantly easier to maintain MSK’s site functionality with confidence as Drupal 8 continues to evolve. Self-documenting tests, which weren’t available in Drupal 6, were a welcome change for the MSK team.
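For a sense of what those tests look like, here is a minimal sketch of a PHPUnit unit test built on the base class Drupal 8 core provides; the msk_example module name is invented for the example.

```php
<?php

namespace Drupal\Tests\msk_example\Unit;

use Drupal\Component\Utility\Html;
use Drupal\Tests\UnitTestCase;

/**
 * A small, self-documenting unit test using core's PHPUnit integration.
 *
 * @group msk_example
 */
class HtmlEscapingTest extends UnitTestCase {

  /**
   * Markup passed through Html::escape() should come out inert.
   */
  public function testScriptTagsAreEscaped() {
    $this->assertSame('&lt;script&gt;', Html::escape('<script>'));
  }

}
```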

External Libraries

Drupal 8’s incorporation of Twig accelerated the theming process. Beyond Twig, the inclusion of external libraries (JavaScript libraries such as Backbone, plus PHPUnit, Guzzle, and Symfony’s YAML, Routing, and Dependency Injection components, to name a few) created a great framework for our developers to work in.
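Because those libraries ship with core, they are available to custom code without extra plumbing. The snippet below is a small sketch rather than project code; the URL and file name are placeholders.

```php
<?php

use Symfony\Component\Yaml\Yaml;

// Guzzle is bundled with Drupal 8 core and exposed as the http_client service.
$response = \Drupal::httpClient()->get('https://example.com/status.json');
$status = json_decode((string) $response->getBody(), TRUE);

// Symfony's YAML component, also bundled with core, parses the same format
// Drupal 8 itself uses for configuration and module definitions.
$settings = Yaml::parse(file_get_contents('example.settings.yml'));
```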

Don’t Miss the D8 Train

We fully believe Drupal 8 (even as a beta) is a valuable alternative to Drupal 6 and 7, especially for enterprise organizations that can combine core with targeted custom code. What’s more, the community needs more organizations to take the leap to Drupal 8 in order to drive improvements and provide influential feedback; Phase2 and MSK were able to contribute a significant amount of code back to the project. To move Drupal 8 closer to an official release, more organizations need to invest in its creation through projects of this kind – and Drupal vendors need to be ready to support them.


Drupal 8 is a win-win for enterprises and the Drupal community alike. Are you and your organization ready to transform with Drupal 8? Take the first step by attending our DrupalCon session with Memorial Sloan Kettering (or our session on Drupal 8 for enterprise organizations!). You can learn from the challenges we faced and come away with a list of hints, tricks, and best practices for beginning your own Drupal 8 project. In the meantime, stay tuned to our blog for more on our adventures in Drupal 8.