Better Development Through Emotional Intelligence

I am unabashedly an engineer. I obsess over finding the most efficient solution to any problem.

In the realm of open source software, this approach has served me well. We read, reverse-engineer, fork, improve, and share. I want my process to be faster, more flexible, and maintainable for the long haul.

As I’ve investigated different methodologies, one characteristic I constantly underestimate is the team dynamic. I tend to pigeon-hole my mind into thinking that the solution to a problem is the most important goal.

Hey look! There’s a problem! I must find a solution for it.

  • What if complex problems can’t be solved by me?
  • What if, when I suggest a tool or a programming philosophy, it masks the need to dive deeply into other factors?

As I researched my approach, I came across a concept that is vital to team effectiveness when solving complex problems: emotional intelligence.

Emotional Intelligence, sometimes referred to as EQ (emotional intelligence quotient) to complement IQ (intelligence quotient), is the ability to be aware of, express, control, reason about, and interpret emotions appropriately.

Within a team, many, many, many studies have shown that EQ, more than IQ, is the key to solving complex problems.

The team dynamic is ingrained in the DNA of open source projects. Any Drupal issue queue or Packagist library commit log supports that.

The better question I ask myself, however, is:

  • Are the teams I work on the most emotionally intelligent?
  • If not, what am I doing to improve that metric?

Peeling back this onion revealed the societal constructs that affected how I view an effective team.

Typically, I look to the most technical people I know for answers. In some cases, I follow the stereotypical engineer playbook of positing a hypothesis, demanding evidence, and playfully browbeating a decision.

Put another way, how many times have I jokingly used the phrase “You are doing it wrong”? Is that the most effective solution, even when I mean no malice?

As the research suggested, this emotionally oblivious approach was philosophically incongruent with proven science!

  • How could I call myself an engineer?!?!
  • How could I obsess about the pursuit of efficiency and solution, when my own attitude was blunting my team’s effectiveness?

I needed to do better.

I needed to find something, rooted in math and science, that helped me understand how to refactor my way of thinking.

I then learned about perspective and heuristic techniques. Perspective is how one looks at a problem. Heuristic is the mental shortcut one uses to arrive at a solution. Both are shaped by experience and knowledge, but the nuance in process from a variety of individuals is key.

Dr. Scott Page elaborates:


The diversity of an agent’s problem-solving approach, as embedded in her perspective-heuristic pair, relative to the other problem solvers is an important predictor of her value and may be more relevant than her ability to solve the problem on her own. Thus, even if we were to accept the claim that IQ tests, Scholastic Aptitude Test scores, and college grades predict individual problem-solving ability, they may not be as important in determining a person’s potential contribution as a problem solver as would be measures of how differently that person thinks.

It opened my eyes to how I’ve been going about solving complex problems all wrong.

In the context of a complicated problem, there is a higher likelihood of finding a global optimum (the best solution) when you have a diverse set of team members, each with their own local optimum (their own best solution). Put simply, I needed to engage more (not less) with people who were different than me.

In essence, given the right conditions, diversity trumps ability!

What’s interesting about this research, however, is the fact that communication among members with different perspectives is very difficult.

In fact, as Dr. Page continues:


Problem solvers with nearly identical perspectives but diverse heuristics should communicate with one another easily. But problem solvers with diverse perspectives may have trouble understanding solutions identified by other agents.

Thus, we’ve come full circle to why EQ is so important.

If team members are not in tune with each other, the benefits gained from their diversity can be lost. It is vital, therefore, given my unabashed obsession with engineering, that I not only improve my own EQ, but also surround myself with colleagues who have a high EQ and learn from them.

So what are the characteristics of high EQ individuals? Statistically, who has high EQ?

Some of our thought leaders here at Phase2 have answered that question.

If you’re interested in learning more, find me as I share my ideas on building a more inclusive community at various conferences and camps!

Profiling Drupal Performance with PHPStorm and Xdebug

Profiling is about measuring the performance of PHP code, at least when we are talking about Drupal and Xdebug. You might need to profile your site or app if you work at a firm where performance is highly scrutinized, or if you are having problems getting a migration to complete. Whatever the reason, if you have been tasked with analyzing the performance of your Drupal codebase, profiling is one great way of doing so. Note that Xdebug’s profiler does not track memory usage. If you want to know more about memory performance tracking you should check out Xdebug’s execution trace features.

Alright then, let’s get started!

Whoa there cowboy! First you need to know that the act of profiling your code itself takes resources to accomplish. The more work your code does, the more information the profiler stores; file sizes for these logs can get very big very quickly. You have been warned. To get going with profiling Drupal in PHPStorm and Xdebug, you need PHPStorm and a working Xdebug installation.

To setup your environment, edit your php.ini file and add the following lines:
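The config lines themselves didn’t survive the formatting of this post; a typical Xdebug 2.x profiler setup looks something like the following (the extension path and output directory are examples, so adjust them for your system):

```ini
; Load the Xdebug extension (the exact path depends on your PHP install)
zend_extension=/usr/lib/php/modules/xdebug.so

; Don't profile every request by default...
xdebug.profiler_enable=0

; ...only profile requests that pass the XDEBUG_PROFILE trigger
xdebug.profiler_enable_trigger=1

; Where the cachegrind.out.%p snapshot files get written
xdebug.profiler_output_dir=/tmp/profiler
```

With the trigger enabled, profiling stays off until you ask for it, which keeps those snapshot files from piling up.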

Depending on what you are testing and how, you may want to adjust the settings for your site. For instance, if you are using Drush to run a migration, you can’t start the profiler on-demand, and that affects the profiler trigger setting. For my dev site I used the php.ini config you see above and simply added the URL parameter “XDEBUG_PROFILE=on” to my site’s URL; this starts Xdebug profiling from the browser.

To give you an idea of what is possible, let’s profile the work required to view a simple Drupal node. To profile the node view I visited http://profiler.loc/node/48581?XDEBUG_PROFILE=on in my browser. I didn’t see any flashing lights or hear bells and whistles, but I should have a binary file that PHPStorm can inspect, located in the path I set up in my php.ini profiler_output_dir directive.


Finally, let’s look at all of our hard work! In PHPStorm navigate to Tools->Analyze Xdebug Profile Snapshot. Browse to your profiler output directory and you should see at least one cachegrind.out.%p file (%p refers to the process id the script used). Open the file with the largest process id appended to the end of the filename.

PHPStorm Cachegrind Picker

We are then greeted with a new tab showing the results of the profiler.

PHPStorm Xdebug Profiler

The output shows us the functions called, how many times they were called, and the amount of execution time each function took. Additionally, you can see the hierarchy of all function calls and follow potential bottlenecks down to their roots.

There you have it! Go wild and profile all the things! Just kidding, don’t do that.

How and Why to Patch a Drupal Module

Recently, at Drupal Camp Costa Rica, I was pleased to discuss what I feel is a very important, and very fundamental, Drupal technique. Patching code is something many of us have done, but the Drupal community has evolved standards around when, why, and how it’s appropriate to do so. I’d like to run through some of the highlights of that talk here.

What’s a patch?

Let’s start at the beginning. What do I mean by, “patch a module”? A patch is a small text file that contains a list of all the differences between the code as published (usually on Drupal.org) and the code as you wish it to run on your site. You “apply” it to the original code, and the changes are made so that you can use the code the way you want it.

Why would I do that?

In the Drupal community, we DON’T HACK CORE! This is a commonly held tenet of Drupal development, and there are very good reasons not to hack core, or even contrib modules:

  1. Forward compatibility: If new features, changes, or, most importantly, security releases are made to the module, you can’t take advantage of them without losing your changes.
  2. Developer friendliness: If your changes introduce a bug down the road, other developers will not look in that module, because they will assume it hasn’t been changed. This will cost them time and frustration.

What’s the difference?

What is the difference between a patch and a hack? Method.

When I say I “hack” a module, I mean that I am changing the module code directly, putting it straight into my site repo or on my site server, and running it. Changes like this are usually pretty invisible to other developers.

When I say I “patch” a module, it means that the changes I’ve made are in a separate text file, which is applied to the module when the site is built. These changes are also easily accessed and reviewed by other developers.

This tiny methodology difference means a great deal in actual practice.  A module that’s been hacked is very difficult to use in the long term. Changes made to it are often not recorded anywhere (or anywhere anyone would look), and if the module is replaced, by say a new or updated version, then those changes are lost forever.

It’s ok to patch core or contrib, just don’t hack it!

When would I patch a module?

  • You’ve found a module that does most of what you need… but not quite everything.
  • You’ve found a bug in the module.
  • You need to integrate custom functionality into the module, but it doesn’t have the right API functions.
  • You need a change right now and the module maintainer isn’t responding.

When would I not patch a module?

  • The module provides hooks or alter functions that will allow you to do what you need.
  • The module only does a little of what you need, and you could probably build a custom module with the same effort.
  • The dev version of the module has what you need, or there’s already a patch in the issue queue.

Please note this last point – work smarter! Spend some time before you get to coding to make sure that someone else hasn’t already done the work for you! That’s open source, folks – use it!

So, how do I do it?

Step 1: Check out the module

The easiest way to generate a patch file is by using git; this means you need to check out the module. See the screenshot here for instructions on checking out a Drupal module. Note, if you are running a development site, you can check this module out into your sites/all/modules directory; git is smart enough to handle it even if your full site directory is a git repo, too. (Though I don’t recommend that methodology, and you’ll see why later.)

Be sure to check out the dev version of the module; it’s going to be the most up-to-date, and module maintainers want to have your code apply to the latest version of what they’re working on.

Step 2: Hack away!

This is where you have the opportunity to make and test the changes you need to make in the code.

Step 3: Make the Patch

There are two ways of doing this: on the command line, or via a GUI.

Command line is actually pretty easy. Once you’ve made your changes in the checked-out module directory, you run this simple command:
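The command itself was lost in formatting; Drupal patches are conventionally generated with `git diff`. Here’s a self-contained sketch using a throwaway repo so you can see the whole flow (the module name, issue number, and comment number are purely illustrative):

```shell
# Sketch: generating a Drupal-style patch with git diff.
# A throwaway repo stands in for a checked-out module directory.
repo=$(mktemp -d) && cd "$repo"
git init -q

# Simulate the dev version of the module you checked out:
echo "<?php // original code" > patch_demo.module
git add patch_demo.module
git -c user.name=demo -c user.email=demo@example.com commit -qm "dev checkout"

# Make your change, then capture it as a patch file:
echo "<?php // patched code" > patch_demo.module
git diff > patch_demo-job-roles-field-2313551-3.patch

grep -q "patched code" patch_demo-job-roles-field-2313551-3.patch && echo "patch written"
```

In a real checkout you would skip the setup steps and simply run `git diff > name-of-patch.patch` from the module directory after making your changes.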

This will put all modified files into the patch. As with most command line tools, there are complicated options to only include certain files, or to compare different directories, etc. However, I personally favor a different way…

Sourcetree. It’s no secret that we here at Phase2 are huge Atlassian fans. This is another product by them, a GUI interface for managing your repositories, and it is awesome. It’s also free. It can manage your repos, help you keep them organized, and provides a visual interface for branching, git flow, and – of course – generating patches. Personally, while I can work with git on the command line, I don’t, because I use this instead.

Did I mention it’s free? (Disclaimer: I didn’t get anything for this recommendation, except a free copy of Sourcetree).

Congratulations! You’ve written a patch for a module! Now what?

Step 4: Submit your work

Create a new issue in the issue queue of the module you’re working on. Fill out the form, but don’t attach your file just yet. You’ll need to rename it, and the Drupal community has a specific formula for doing so.

[module name]-[short description]-[issue number]-[comment number].patch

That’s the module name, followed by a one-to-two-word description of what the patch does, followed by the nid of the issue queue node and the number of the comment, dot patch. Let me show you where to find those numbers.

The issue number is the nid of the issue node, and can be found in the URL.

The comment number is not the cid of a comment; it’s the visual display number of the comment (see the second screenshot on the right). However, it’s the number of the comment you’re about to add, so when you rename your patch, use the number of the last comment, plus one.

So, let’s make an example. For the module patch_demo, you need to add an additional field to the database table for “job roles”. The nid of the issue node is 2313551. There are currently two comments on that thread. So, you could title your patch:

patch_demo-job-roles-field-2313551-3.patch

Now, make that comment and attach your patch! Be sure to set the issue “Status” to “Needs Review”, as this will trigger the automated testbot to inspect and attempt to apply the patch.

Step 5: Bring it all together

Drush make is an amazing tool. It allows you to specify modules, themes, libraries, site structure… everything for building a site. While the use of it is pretty involved, and beyond the scope of this tutorial, I will touch on one thing: the ability to apply a patch to your site build automatically.

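The screenshot of the make file didn’t survive this post; a minimal Drush make file consistent with the description below might look like this (the sandbox URL, versions, and file paths are illustrative placeholders, not the original file):

```ini
; Illustrative Drush make file (Drupal 7-era syntax); URLs are placeholders.
core = 7.x
api = 2

; Drupal core
projects[drupal][version] = 7.x

; The Features module from drupal.org
projects[features][version] = 2.0

; The patch_demo module from a sandbox repository
projects[patch_demo][type] = module
projects[patch_demo][download][type] = git
projects[patch_demo][download][url] = http://git.drupal.org/sandbox/example/patch_demo.git

; Apply the patch, keyed by the issue queue nid so future devs can find it
projects[patch_demo][patch][2313551] = http://drupal.org/files/issues/patch_demo-job-roles-field-2313551-3.patch
```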

This is a very basic drush .make file. It tells drush to download Drupal core, the Features module, and the patch_demo module from my sandbox. The last line, outlined in orange, tells drush to grab that patch file from the drupal.org file server and apply it to the patch_demo module. Note that part of the array specification includes the issue queue nid – this is important, as it allows future developers to easily find that issue queue and see why you are patching the module. The naming convention on the patch file itself tells future devs which patch to look for, and in what comment.

Running drush make on this make file will download Drupal core, the Features module, and my patch_demo module. It will assemble the site, then apply the patch to the patch_demo module, making the changes we specified much earlier in the process.

But whhhhhhhhy???

Since the patch only needs to be accessible to drush via the web, it could be anywhere – like, on a file server, or your company’s web site download directory, or Github, or Dropbox. Why contribute in the Drupal issue queue?

Because each of you, no matter how new, or inexperienced, or totally ninja, can help make Drupal better. Communities work best when everyone contributes. Your patch may not be accepted, or it may, or it may spark a discussion that leads to a better way of doing what you need. This is good for Drupal, which means in the long run it’s good for you too!

Also – even if it isn’t accepted, it’s in the issue queue file system, so it never goes away – meaning you can continue to use it. I’ve submitted many patches I knew would never get accepted, because I needed something to work in a particular unique way. That’s OK.

Finally, on a more personal note, contributing in the issue queues helps get you known in the Drupal community. This is great for your career.

Help make Drupal better. Save the kittens. Get yourself a better job. Contribute!


Here are a number of really good resources I drew on to write this presentation and tutorial.

Last word: Thanks to the Drupal Camp Costa Rica team for giving me time to present such an important topic to their devs.  ¡Pura Vida!

Introducing OpenPublic’s App Strategy – Reimagining Content Management For Public Sector

Since it was first developed, OpenPublic has redefined open source public sector content management. Packaging government-focused functionality into a secure Drupal distribution, once a radical notion, is now an established open source web solution. As the foundation of many of Phase2’s public sector site builds, OpenPublic has demonstrated the importance of solutions that are replicable, that can prevent duplication of services, and that provide examples of repeatable best practices. OpenPublic serves as an accelerator for building good government websites, containing best practices and features around mobility, security, and accessibility.

With the release of OpenPublic release candidate 1, we’re simplifying content management in the enterprise public sector space by appifying all functionality and content types in OpenPublic. What was once a wilderness of modules and configuration will be encapsulated in a clean collection of Apps, making all of OpenPublic’s out-of-the-box functionality simple to configure for non-technical site administrators. This new App strategy will make it easier and cheaper for governments to implement the web functionality they need.

So, what is an App?

It can be confusing to pin down definitions for terms like modules, features, distributions, and Apps. An App is simply a collection of code (modules, features, or themes) that delivers a distinct piece of functionality upon installation. Like the “apps” on your smartphone or tablet, Drupal Apps provide functionality for a specific purpose – quickly, easily, and independently.

In 2011, Jeff Walpole introduced the concept of Apps for Drupal distributions and the new Apps module. Apps improve usability for site administrators, particularly as compared to the traditional Drupal Configuration dashboard. From the beginning, Apps added extensibility and interoperability to Drupal. Now, instead of adding Apps for extensibility, we’re appifying all distribution functionality for OpenPublic 1.0, finally giving the content administrators full configuration control.

Appification of OpenPublic For State and Local Government

One Code Base, Many Departments

OpenPublic  Apps provide city, county, and state government agencies the ability to turn on and off independent pieces of functionality without affecting any other functionality on their platform. Many public sector agencies require a unified CMS spanning all departments and agencies. OpenPublic provides this through standard Apps developed for government needs, including directory management, stringent security, and editorial workflow. However, the flexibility of OpenPublic 1.0’s Apps also allows for specific functionality by department. This means that trying out new functionality is as easy as turning an App on or off, giving governments the opportunity to test innovative approaches without heavy risk or cost of implementation. See how San Mateo County uses OpenPublic Apps.


Simplified Configuration

Apps take Drupal development out of the equation, empowering site administrators to skip technical development when configuring individual department sites. Each App is self contained, so changing the configuration does not cause problems with other site features.

Custom Functionality Made Easy

With OpenPublic, users can develop Apps specific to their objectives. San Mateo County, Calif., for instance, used OpenPublic to develop an App which adds custom location mapping to any page on the web platform. Once created, the San Mateo County team was able to test their new App in one department, then enable it for other departments when it was deemed successful. The sky’s the limit with OpenPublic’s new App structure, with unique and flexible functionality for public sector platforms.

OpenPublic is breaking new ground in web experience for public sector site administrators and visitors alike. With the Appification of all functionality in OpenPublic, we are knocking down traditional barriers to Drupal site maintenance and scalability with intuitive configuration. Stop by Experience Director Shawn Mole’s Capital Camp session with Greg Wilson, Director of Public Sector Practice, to learn more about how OpenPublic is truly the next generation of open source government sites.


Talking Mapping at the 2014 ESIP Summer Meeting

Last week I had the opportunity to present at the Federation of Earth Science Information Partners (ESIP) Summer Meeting held in Copper Mountain, CO. The Summer Meeting is a gathering of IT professionals from across several different agencies such as NASA, NOAA and USGS. Each year, the group comes together to talk about the challenges that they each face while trying to engage and support the scientific community.

When I got in on Wednesday a few of us got together to talk about how to kickstart the Science on Drupal group. While there’s been a science presence in the Drupal community for several years now in one form or another, there’s been a recent interest in pooling resources together to make a larger group. We had a great time strategizing how to grow the group over chips and salsa.

For my presentation, I went over various different tools for doing online mapping work, both with native Drupal tools and other toolsets.

Map of tornado data

One of the big challenges that this community has to face is how to work with large datasets that don’t fit neatly into a typical Drupal site. For my part, I spent a lot of time going over how to leverage tools like D3, CartoDB, GeoServer, and Mapbox to connect to data outside of Drupal and provide meaningful interaction with it.

They also exposed me to DEIMS, a Drupal distribution that they had collaborated on, which also features some interesting ways to interact with external data. There was a great presentation at DrupalCon Austin on the distribution that’s definitely worth checking out.

If you’re interested in catching the presentation, the slides are posted on Github and the video is here. If you’re interested in catching up with the Drupal in Science working group, check out their group page.

Thanks again to Adam Shepherd and the rest of the ESIP Drupal Working Group for inviting me out to hang out and learn from their experiences.

It’s Almost Time for Capital Camp and Drupal Gov Days!

We could not be more excited that two of our favorite DC events – Capital Camp and Drupal 4 Gov – are merging! The combined event, happening July 30th through August 1st at the National Institutes of Health, promises to be one of the most informative and inspiring conferences on Drupal and open source in government yet. Phase2 is proud to be a platinum sponsor of what is sure to be an action-packed conference in the nation’s capital, our hometown.


We’ve lined up 10 of our all-stars to present sessions at this year’s event (we told you we were excited!). Whether you’re interested in design, collaboration, or custom government solutions, we’ve got you covered. Here’s a sneak peek at our speaker roster…

content management solutions for government

Kick off your Capital Camp experience with a case study on How San Mateo County Is Raising the Bar with OpenPublic. Experience Director Shawn Mole and Program Director Felicia Haynes will discuss the technical challenges that San Mateo faced as a local government, and how they utilized Phase2’s Drupal distribution to overcome those obstacles. For more details on OpenPublic, catch OpenPublic 1.0: The Next Generation of Open Source Government Sites, presented by Shawn Mole and Greg Wilson, Director of Public Sector Practice at Phase2. Then learn how to create a “Sleep at Night CMS” with Senior Developer Randall Knutson.

design and user experience

Necessary Capital Camp preparation: put these three sessions from Phase2’s front-end masterminds on your agenda. Start with Senior Developer Mason Wendell, who knows that great design, like jazz, needs both harmony and discord. His session, Thinking Inside the Box X3, will focus on component-driven design. Senior Designer Joey Groh will expose a real project’s collaborative design process in his session, Collaborative Design to the Rescue: Photoshop in a post-Photoshop World. Finally, in his talk Amazing Design Through Empathy, Senior Experience Analyst David Spira will illustrate how to use empathy to improve all aspects of product design, from requirements gathering to user research and everything in between.



Drupal has already proven to be a viable alternative to proprietary models for government CMS. Now Open Atrium is helping Drupal provide government agencies with an enterprise grade, open source platform to connect teams, departments and constituents. Learn from Greg Wilson and Mike Potter, Open Atrium’s Lead Architect, how OA2 addresses government collaboration needs in their talk, Open Source Collaboration for Government with Open Atrium. For a story of true open source collaboration and innovation, check out Director of Engineering Steven Merrill’s session on OpenShift and Drupal.

configuration, testing and site building

In recent years, Open Data has evolved from a buzzword to a reality to a requirement for governments, NPOs, and NGOs globally. To explore what Open Data is, how to use it, and what it means to your organization’s website and its followers, stop by Senior Developer Robert Bates’ session, Open Data: Not Just a Buzzword. For more advanced developers, Steven Merrill will present on Open Source Logging and Metrics Tools, in which he will dive into open source logging infrastructure and how you can apply the same tooling to your own sites. Finally, learn Best Practices for Development, Deployment, and Distributions from Mike Potter.

Be sure to visit our exhibitor booth to learn more about Phase2 and our people, and of course to grab some infamous Phase2 swag! Are you attending Capital Camp and Drupal Gov Days? What sessions are on your must-see list? Let us know below!

Open Atrium: The Open Source Enterprise Collaboration Solution for Government

In today’s digital world, government agencies are faced with the challenge of determining how best to connect not only internally, but externally with citizens and stakeholders. At the federal, state, and municipal levels, agencies coordinate and share information with other agencies, with external communities, and across different levels of government.


As they attempt this coordination, government agencies must meet the collaboration needs of specialized projects, while operating within budget allocations and resource constraints.

“Challenge” may be an understatement. Luckily, a robust and secure toolset exists to successfully connect disparate government systems and constituents.

On Tuesday, June 24th we will be leading a discussion about collaboration in government and how Open Atrium can provide an open source enterprise solution to connect actors and engage citizens in the public sector.


Open Atrium provides an open source, enterprise-grade collaboration platform for government that:

  • Engages constituents with a modern, mobile-friendly experience

  • Streamlines communication and workflows across groups

  • Integrates with enterprise systems

  • Provides the security that allows agencies to restrict access to information on both sides of the government firewall

Mike Potter, Open Atrium’s Lead Architect, and Greg Wilson, Director of Government Practice at Phase2, will discuss some of Open Atrium’s key features that make it a great fit for collaboration in government, including:

  • Secure document sharing and collaboration:  Securely keeps information in one place, unlike email.

  • Time management tools: Manage and monitor project activities with calendars and project tracking tools.

  • Security and access control: A robust access control system that outperforms any other open source solution.

  • Online communities and communication: Launching open or private discussions is simple.

  • Citizen and stakeholder engagement: Control the content the public sees.

Be sure to grab a seat at tomorrow’s webinar on Tuesday, June 24th at 12 PM EST to learn what Open Atrium can do for your agency!


San Mateo County: Raising The Bar For Local Government With OpenPublic

We recently celebrated the launch of San Mateo County‘s new and improved web platform using Drupal and OpenPublic. The San Mateo County digital team had a vision of a new web platform that would accommodate all departments with intuitive administrative functions, as well as a well-designed end-user interface to efficiently deliver information to the County’s citizens. We knew we could deliver this with our Drupal distribution, OpenPublic. OpenPublic is an open source distribution built with Drupal, developed to address commonly recurring challenges faced by government agencies when managing their web content. With the successful launch of the San Mateo County platform, I finally got the chance to sit down with Beverly Thames, Content and Collaboration Manager at San Mateo County, to chat about the web platform redesign and OpenPublic. Here is our Q&A:

Q: What is the goal or mission of the San Mateo County digital presence, and why did you need a redesign?

A: Between departments and central IT, the website was a multi-million dollar per year enterprise, and yet, County leaders were dissatisfied. The site failed to meet their needs or the needs of the public. Two goals of the redesign were to lower costs and provide better service to our departments and site visitors. We wanted to improve communications and to reduce or streamline in-person office visits.

We had major challenges with our old proprietary content management system’s inflexibility and highly technical nature. This made it difficult for departments to produce content and to organize that content in a meaningful way. The result was that much of the content grew stale, while vital “evergreen” and fresh content was difficult for visitors to find via search or the site’s menus. Beyond these functional shortcomings, the frustrating user experience on the CMS was made worse by the outdated visual design, which displayed poorly in most modern browsers, especially when viewed on mobile devices. Overall, the sites did not communicate the San Mateo County or department brand very well, a situation compounded by the sites’ outdated content features. It was time for a change.

Q: What were the challenges and needs for San Mateo County’s digital presence and how did OpenPublic address them?

A: We chose to build our site on OpenPublic because it is tailored to the needs of government. County leadership could only be convinced to adopt open source if they were assured the system was secure and accessible. OpenPublic delivered both.

Each department’s identity and requirements are tied to their lines of business and the communities they serve. The County departments wanted to maintain their unique identities within the overall County brand. OpenPublic allowed the County to maintain a strong central brand while meeting user demand for autonomy and flexibility.

Q: Was using an open source platform an important factor in choosing a new content management system? If so, why?

A: San Mateo County was spending too much on annual licensing fees for a proprietary content management system that no one liked. There was little user support, and few developers knew the CMS, so when you could find one, their rates were high. Open source comes with a global community of support and many talented developers.

Open source is a natural solution for the government sector because we are constantly sharing our work with our peers and the public. Adopting an open source CMS allows the County to benefit from and contribute to the continuous improvement of the platform within the context of a larger user community.

Q: What is next for San Mateo’s digital presence?

A: Now that the site has been up for a few months, we will start digging into the analytics to see where we need to tweak search and to help us identify topics for curated pages.  We’re also excited about exploring potential integrations with our Open Data portal, GIS mapping and enterprise content management.


Learn more about San Mateo County’s new Drupal platform in our Portfolio, and get a deep dive look at how we developed an improved search functionality to better serve the County’s constituents.  If you are heading to DrupalCon Austin next week, don’t miss our session “Drupal For Gov- Raising The Bar With OpenPublic.”


Workflow within Open Atrium 2


A key requirement in most organizations is a content approval workflow.  The typical Drupal solution uses the Workbench Moderation module. However, the Workbench Moderation module only allows you to create a single site-wide workflow.  What if you need different workflows in different Open Atrium spaces, or between different content types?  The solution is the new Open Atrium Workbench module!

The Open Atrium Workbench module, along with several dependent modules, was a collaboration between Phase2 (srjosh and myself) and the Community (dsnopek).  It allows you to define multiple workflow “profiles” and apply them to content types within Open Atrium Spaces and Sections.  It also allows you to specify the Groups and/or Teams who are allowed to moderate content through the workflow.

Workbench Moderation Profiles

A workflow “profile” is a collection of States and Transitions.  A “State” represents where in the workflow a specific piece of content is, such as “Draft”, “Needs Review”, “Published”, etc.  A “Transition” is the act of moving between two states. Typically a Transition can only be made by a specific set of users with proper permissions.  For example, only members of a specific Working Group might be allowed to approve content, or only members of the Marketing team allowed to Publish content.

The Workbench Moderation module allows you to create these States and Transitions, but applies them globally across the entire site.  The new Workbench Moderation Profile module (currently a sandbox) creates a new entity-type for storing workflow profiles.  The States and Transitions are still created globally by Workbench Moderation, but the actual collection of these being used for a specific workflow are controlled by the profile entity.

Workbench Moderation Profiles is very generic and doesn’t care exactly how these profiles are applied.  Submodules to support Organic Groups and Content Types are provided.  Hooks are available for further control.  The Open Atrium Workbench module uses these hooks to provide Space-specific and Section-specific workflow profiles.

Turning it all on

To add workflow to Open Atrium, download and install the Open Atrium Workbench module and all its dependencies.  There are several patches needed to Workbench Moderation, documented in the oa_workbench.make file.  Eventually these will get committed to Workbench Moderation, and much of the Profile sandbox will be incorporated directly.

Once all of the modules are enabled, the various Drupal permissions and Organic Groups (OG) permissions need to be set.  Permissions in Workbench Moderation are a bit backwards from the norm:  submodules *revoke* access rather than *grant* access.  Typically this means you will set the Drupal Workbench Moderation permissions (including View Unpublished content) to be allowed for all authenticated users, then use OG permissions to restrict to Members and/or Space Admins, then optionally use OA permissions to restrict to Groups/Teams.

After enabling permissions, you next need to enable Moderation of Revisions on the specific content types that you want to use workflow.  When using Open Atrium, enabling Moderation on a content type will not turn on the workflow features until the Space itself enables a Profile.  Typically this means you will enable Moderation on many content types, such as Document Pages, Events, etc.  In most cases, moderation will *not* be used on Discussion Posts since those are usually ad-hoc discussions that do not require approvals.


Finally, the last step is to create your custom Workbench Profile containing the transitions you wish to include, then enable this workflow Profile for your Space or Section.  For Spaces, the workflow profile is set within the Config/Workbench Moderation page.  For Sections, the workflow profile is set by editing the Section node and setting the profile field.

To limit transitions to Groups or Teams, enable the oa_workbench_access module (included with oa_workbench).  To allow transitions to be scheduled automatically in the future, download and enable the Workbench Moderation Scheduled Transitions module (also a sandbox).

How it works

Once Open Atrium Workbench is configured, creating new content within a Section will display the normal Workbench Moderation messages panel.  This panel provides information about what State the document is in and allows options for moving to a different state if you are approved to make that transition.

For example, a Member creates a new content Draft.  Once they are happy with the draft, they move it to the “Needs Review” state.  Somebody authorized to review the document visits their “My Workbench” page from their User Badge dropdown menu and goes to the “Needs Review” tab to see all of the content awaiting their approval.  After reviewing the document, they can either Reject the document, sending it back to the Draft state, or they can approve the document, sending it to the Published state.  Only the users authorized to publish the document will see the Published option in the workbench panel.

Once content is published, a new draft can still be made.  Workbench Moderation supports having one revision of content published while a different revision is in the draft state.  The new draft will only replace the currently published revision once it is approved and published via the workflow.

See you at DrupalCon Austin!

For more detailed information on using Open Atrium Workbench, watch my hands-on demo webinar.  If you are coming to DrupalCon Austin, stop by the Phase2 booth for a demo, or schedule a demo to discuss your specific organizational needs.  Or just come by, say “Hi”, and tell me about the cool and interesting ways you are using Open Atrium 2 in your organization.

While I’ve been using the default publishing approval workflow as my example, each organization has different workflow needs.  The workflow profile used for publishing documents is quite different from the workflow used to manage tasks or issues.  The workflow used in a private section (if any) is likely different than the workflow used in a public section.  Open Atrium supports all of these different cases in a systematic, easy-to-use way that will feel familiar to users of the Workbench Moderation module.  This functionality makes Open Atrium a key solution in the Enterprise Collaboration space, on par with many non-open-source systems.

Combining Tasks with Grunt

I was recently asked to help out with a few build steps for a Drupal project using Grunt as its build system. The project’s Gruntfile.js has a drush:make task that utilizes the grunt-drush package to run Drush make. This task is included in a file under the tasks directory in the main repository.


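A minimal sketch of such a task file, assuming grunt-drush’s args/dest options; the makefile path and build directory below are illustrative guesses, not the project’s actual values:

```javascript
// tasks/drush.js — a sketch, not the project's actual file.
// The makefile path and build directory are assumptions.
module.exports = function (grunt) {
  grunt.config('drush', {
    make: {
      // run: drush make <makefile>, writing the site to dest
      args: ['make', '<%= config.srcPaths.make %>/project.make'],
      dest: '<%= config.buildPaths.html %>'
    }
  });
  grunt.loadNpmTasks('grunt-drush');
};
```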
You can see that the task contains a few instances of variable interpolation, such as <%= config.srcPaths.make %>. By convention, the values of these variables go in a file called Gruntconfig.json and are set using the grunt.initConfig method. In addition, the configuration for the default task lives in a file called Gruntfile.js. I have put trimmed examples of each below.



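A trimmed sketch of both files; every path and value here is an illustrative assumption (the mkdir task comes from the grunt-mkdir plugin and clean from grunt-contrib-clean):

```javascript
// Gruntconfig.json (trimmed; values are assumptions):
// {
//   "srcPaths":   { "make": "src" },
//   "buildPaths": { "html": "build/html" }
// }

// Gruntfile.js (trimmed sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    config: grunt.file.readJSON('Gruntconfig.json'),
    clean: { default: ['<%= config.buildPaths.html %>'] },
    mkdir: { init: { options: { create: ['<%= config.buildPaths.html %>'] } } }
  });
  grunt.loadTasks('tasks');  // picks up the drush:make task file
  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-mkdir');
  grunt.registerTask('default', ['clean:default', 'mkdir:init', 'drush:make']);
};
```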
As you can see, the project’s Gruntfile.js also has a clean:default task to remove the built site and a mkdir:init task to make the build/html directory, and the three tasks are combined with grunt.registerTask to create the default task, which runs when you invoke grunt with no arguments.

A small change

In Phase2’s build setup using Phing, we have a task that runs drush make when the Makefile’s modified time is newer than the built site’s. This allows a user to invoke the build tool and only spend the time doing a drush make if the Makefile has indeed changed.

The setup needed to do this in Phing is configured in XML: if an index.php file exists and it is newer than the Makefile, don’t run drush make. Otherwise, delete the built site and run drush make. The necessary configuration to do this in a Phing build.xml is below.


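A sketch of what that build.xml fragment can look like; the target names and the ${make} and site.uptodate property names are my assumptions (only ${html} appears in the original discussion):

```xml
<!-- Sketch only: target names and the ${make}/site.uptodate
     properties are assumptions. -->
<target name="check-built-site">
  <!-- Sets ${site.uptodate} only when ${html}/index.php exists and is
       newer than the Makefile. -->
  <uptodate property="site.uptodate"
            srcfile="${make}" targetfile="${html}/index.php"/>
</target>

<!-- Skipped entirely when the built site is already up to date. -->
<target name="drush-make" depends="check-built-site" unless="site.uptodate">
  <delete dir="${html}"/>
  <exec command="drush make ${make} ${html}" checkreturn="true"/>
</target>
```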
You’ll note that Phing also uses variable interpolation. The syntax, ${html}, is similar to regular PHP string interpolation. By convention, parameters for a Phing build live in a properties file.

A newer grunt

The grunt-newer plugin appears to be the proper way to handle this. It adds a newer:-prefixed version of every other defined task. If your task has src and dest parameters, it will check that src is newer than dest before running the task.

In my first quick test, I added a spurious src parameter to the drush:make task and then invoked the newer:drush:make task.
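That quick experiment amounts to something like the following (a sketch; the makefile path is an assumption):

```javascript
// Sketch: give drush:make a src so grunt-newer has a timestamp to
// compare against dest. The path is an assumption.
grunt.config('drush.make.src', '<%= config.srcPaths.make %>/project.make');
// then run: grunt newer:drush:make
```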

That modification worked properly in concert with grunt-newer (and the drush task from grunt-drush didn’t complain about the extra src parameter), but I still needed to run clean:default and mkdir:init conditionally, only when the Makefile was newer than the built site.

Synchronized grunting

The answer turned out to be to create a composite task with grunt.registerTask that combined the three existing tasks, and then to use the grunt-newer version of that composite. The solution looked much like the following.


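A sketch of that composite task; the paths are assumptions, and I use registerMultiTask here so the drushmake task carries per-target src/dest config for grunt-newer to inspect (the original may have wired this up differently):

```javascript
// Sketch (paths are assumptions): a 'drushmake' multi-task whose
// src/dest give grunt-newer timestamps to compare.
grunt.config('drushmake', {
  default: {
    src: '<%= config.srcPaths.make %>/project.make',
    dest: '<%= config.buildPaths.html %>/index.php'
  }
});
grunt.registerMultiTask('drushmake', 'Rebuild the site from the makefile.',
  function () {
    grunt.task.run(['clean:default', 'mkdir:init', 'drush:make']);
  });
// run: grunt newer:drushmake:default — cleans and rebuilds only when
// the makefile is newer than the built site's index.php
```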
I could then invoke newer:drushmake:default in my Gruntfile.js and only delete and rebuild the site when there were changes to the Makefile.

Learn more about build systems in Adam Ross’s blog post “Creating Options in Automated Software Deployment.”