Last week, I made the journey from Washington, D.C., to our new Portland office, or the “Front End Development Capital of Phase2,” as I like to call it. It was an incredibly enjoyable and productive trip, including everything from hanging out with the awesome West Coast Phase2 crew to discussing our vision for growth in Portland over the coming months and years.
While in Portland, I had the pleasure of sharing some great local beer and stimulating conversation at Metal Toad Media headquarters. During an hour-long podcast that provoked many insightful comments and questions, our host Joaquin Lippincott, Metal Toad’s president, led me, Jacob Redding, and David Bellous through a discussion on the merits of open source vs. closed source technologies.
Joaquin got the ball rolling with the aggressive statement that all closed source technology should be torn down and rebuilt with an open source counterpart. Despite each participant’s shared passion for open source, David was cautious about agreeing to such a sweeping declaration, arguing that with enough time and money it is possible to solve any problem with any technology. He placed greater emphasis on finding the tool best suited to a company’s unique cultural ecosystem and specific technical objectives. My take: I fully agree that each situation requires a creative approach to selecting the right tool for the job, taking into account culture, budget, and goals. In my opinion, there are fewer and fewer situations today in which a closed source solution is the right fit.
The discussion turned to the issue of sunk costs and opportunity costs. In the context of what Joaquin deemed the “unmitigated disaster” of Oregon’s online healthcare exchange, which was implemented by Oracle, we debated the increasingly outdated perspective that paying more for software automatically translates to better results. I used Microsoft’s collaborative software SharePoint as an example of a widely purchased product with mediocre if not downright poor customer satisfaction. However, when a massive investment is made (like Oregon’s $132 million in CoverOregon), it is difficult for a government or business to cut its losses and start over, no matter how much additional money it stands to lose in the long run.
Innovation, and the process by which open and closed organizations arrive at it, was another fascinating topic of conversation. Jacob pointed out that while Oracle’s annual investment of $5 million in research and development is easy to grasp, the constant aggregated innovation produced by the open source community is a less tangible concept – but no less meaningful. Most of us acknowledged the merits of a defined road map for innovation, which is usually more visible from proprietary companies, but agreed that collaborative methods of development are being embraced by closed and open source organizations alike. It’s a trend that is really exciting for me personally: more companies are losing their “F.U.D.” (fear, uncertainty, and doubt) about releasing long-kept “secret” code and embracing the clear advantages of open collaboration.
Although Joaquin, Jacob, David, and I covered many other issues, including the advantages of open methodology and the importance of properly executed implementation regardless of the technology, I feel that we’ve only covered the tip of the iceberg in the open vs. closed debate. Altogether it was an extremely enjoyable afternoon (beers included!). Check out the recording at Metal Toad Media, and let me know your thoughts on our discussion. I’d love to hear others weigh in and continue the debate here!
Drupal 8 is bringing some great new features in addition to some fun DX changes. One of the ways I like to learn about these changes is to deconstruct the API.
The best way to deconstruct the API is to dive into code that has a certain purpose, like looking at the Breadcrumb API.
Since we know we’re focusing on Drupal 7 to Drupal 8 changes, we can also use the excellent documentation in the change records to help us.
In my upcoming NYCCamp presentation, I’ll review some of the common API functions we used in Drupal 7 and how they’ve changed in Drupal 8.
What Node Am I On?
A lot of custom blocks that show related content, connected taxonomy, or any other relationship to the currently viewed page typically depend on menu_get_object(). I’m sad to say that our old friend is gone.
In Drupal 8, the way to get details about the current node is through the attributes of the request object, which you can retrieve from the global \Drupal class.
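A minimal sketch of the new approach, assuming you’re on a route (such as /node/{nid}) where the node has been upcast into the request:

```php
<?php
// Drupal 8 (sketch): instead of menu_get_object(), pull the node
// from the attributes of the current request object.
$node = \Drupal::request()->attributes->get('node');
if ($node) {
  // $node is the fully loaded node entity for the page being viewed.
  $nid = $node->id();
  $title = $node->label();
}
```

On routes where no node is present in the request, the attribute lookup simply returns NULL, so always check before using the result.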
When building a small or simple website with Drupal, most of the time the end-user interacts with Drupal directly. We as Drupal developers start off with these sorts of projects where the application we develop (replete with custom modules, a theme full of templates, and content we’ve migrated) is the end result that every Web user sees.
Over time our projects grow in size and complexity, and our skillsets along with them. Our Drupal instance ceases to be the sole recipient of our technical labours, and users must interact with the CMS in different ways. It is now when we are reminded that the CMS we are building is not actually the “thing” that Web users are directly looking at; Drupal is not our Web site.
The real trick is developing a Drupal instance that doesn’t contain all the elements of your Web site while making all the parts and fragments interact as if they were all in the same CMS.
Performance and Scale
As traffic to our site grows, we begin to scale our infrastructure. We’re already caching (page cache, Memcache, etc.) but that’s still not enough; so we place a reverse-proxy server in front to handle the unauthenticated traffic. This allows us to render pages one time and deliver that cached version to many users. Thus, load on the Web server is reduced, and our Drupal instance stops crying for a little longer.
But even that may not be enough. Eventually your projects move to load-balanced environments and you face the complexities of file management spread across multiple servers and file systems. Then you’ll front your CMS with a content delivery network (CDN) and offload many of the ancillary assets to external systems. With a good CDN in place, you can begin to peel off fragments of your pages, host them elsewhere, and let edge-side includes (ESI) assemble the page on the edge.
The more your site is splintered across multiple servers, the more you have to consider how you process content and files. Running batch processes means that some batch requests will not be handled by the same Web server that began the process (which may or may not have access to all the same temporary files or content).
Blurring the Lines Between Static and Dynamic Pages
One of the many things Drupal provides is dynamically generated pages. Your content changes over time and you want the pages of your Web site to reflect those changes quickly. Dynamic page generation comes at a cost, though, and since your content isn’t constantly changing, you’ll want to cache many of your pages. This effectively means your Web page can be noticeably stale; your fresh content sits in the database waiting to be seen, but no one can see it until the cache is cleared or invalidated.
What about content that needs to change in real time? User comments provide a new level of customer engagement and are generated much more frequently than the rest of the page’s content.
There are ways to serve cached content to authenticated Drupal users. Using ESI, you can serve cached content with Edge-side include tags that are parsed by the CDN. These tags can reference un-themed HTML fragments that your CMS generates. The “Welcome Username” in the header of your page need not be part of your actual Drupal theme directly. In fact, the bulk of your page’s content can be served as if the user were unauthenticated, while the user still receives a customized experience.
Web sockets are another means to achieving real-time content updates while still caching the rest of your site. Live scoring or event updates need to reflect the actual progress of the game, so offload the updates to your users’ browsers. In this scenario, you serve a statically cached page with a placeholder for scores (or any other rapidly changing content). Their browser opens a web socket to an API somewhere (outside of Drupal) and updates are pushed directly to the browser and updated on the page without the need for refreshing or time-based polling.
In its third year, NYC Camp has grown to be more than just another Drupal camp; it has become an institution for inclusive and supportive Open Source community contribution and knowledge sharing. This year NYC Camp will be held at the United Nations, with an emphasis on women in Drupal and tech. We are excited to be part of this growing event, and can’t wait to participate throughout the camp. Here is where you can find us at NYC Camp!
NYC Camp is kicking off with a day of free trainings. Our own Mike Potter, lead Open Atrium architect, will be leading a free Open Atrium training covering site setup, customization, and learning to use and configure popular Drupal modules such as Views, Panels, Media, and Organic Groups. Space is filling up fast, so sign up today! Steven Merrill will also be joining Diane Mueller, community manager at OpenShift, to co-lead a Drupal on OpenShift training. Be sure to register for this exciting event!
We are excited to be the DevOps Summit sponsor this year. One of our own Phase2 DevOps visionaries, Steven Merrill, has been a key player in developing the NYC Camp DevOps Summit content over the years. He will be leading the summit again this year, gathering great content for this day-long event covering DevOps, NoOps, continuous delivery and integration, and everything in between – delving into the latest technology and trends with real, tailored, and informative content. Sign up for the DevOps Summit today!
We are also excited to participate in the Nonprofit summit again this year. Molly Byrnes is helping to lead this summit and develop a great schedule of case studies, panel discussions and breakout sessions to facilitate conversation around nonprofits and common tech challenges and solutions. Get plugged into the nonprofit tech community and sign up here.
NYC Camp’s Saturday is filled with a robust session lineup designed for all levels of Drupal and tech knowledge. Find your favorite Phase2 thought leaders presenting sessions:
Open Atrium 2 supports many different patterns for your site’s Information Architecture. Through the use of Spaces, sub-Spaces, and Sections you can easily create a hierarchy of content within your Intranet. However, it can be tedious to create every new section within every new space. The new 2.15 release of Open Atrium adds “Blueprints” which will allow you to clone an existing space structure and automatically create the necessary sections!
Creating a Blueprint
To create a Blueprint, simply create a sample space that contains all of the sections and content that you want to bundle. Let’s create a Blueprint for a basic “Project Space”. In this example, we want our Project Space to contain an Event Calendar, a Discussion Forum, a Document section, and a Task list.
Create a new Space called “Project Blueprint” and save it as a Draft so other users don’t see it.
Create each Section: Calendar, Discussions, Documents, Tasks. In the new 2.15 release you’ll notice that the “Create New Section” action in the drop-down menu on the toolbar allows you to choose which type of section you want to create.
At this point you can adjust any other settings for the space, such as color scheme, banners, taxonomies, permissions, etc. You can even create sub-spaces within this Space and create additional Sections within those sub-spaces. You can also create sample content, such as a Welcome document, or sample discussion forum.
You can also customize any of the Section or Space landing pages using the “Customize Layout” and “Customize Page” buttons. You can place additional widgets on the landing pages, or make any other customizations needed for the Space.
Once your sample space is set up the way you like, click on the “gear icon” in the upper-right portion of the Space landing page and select the “Create Blueprint from this Space” option. You will be taken to a form where you can enter a name for your Blueprint (we’ll call it “Project Space” in this example) and a short description. Be sure the “Clone an existing Space” checkbox is enabled and that the “Space to clone” is set to the “Project Blueprint” space you created above. All of this should be set by default, but it’s good to verify. When you are done, click the Save button.
Voilà! You have created your first Space Blueprint!
Using a Blueprint to create a new Space
Now when you select “Create New Space” in the toolbar drop-down menu you’ll see a choice for the different Blueprints available on your site. Simply select “Create new Project Space” from the menu and you’ll be taken to a normal Create Space form where you can enter the name of your new project space. All of the other fields on this page are filled in automatically with the values from the Blueprint. Change anything you like, then click Publish when you are finished and your Blueprint will be cloned into your new space.
You will be redirected automatically to your new space landing page. Click the drop-down menu in the toolbar next to the space name and you will see the Sections for Calendar, Discussions, Documents, Tasks that you created above. You didn’t need to create these manually, they were created for you automatically by cloning the Blueprint!
That’s really all there is to it…it’s simple to use and simple to understand and will save you hours and hours of work.
Behind the Scenes
How does Open Atrium 2 accomplish this magic? The Drupal node_clone module is being used for the heavy lifting of cloning the actual content nodes, such as spaces and sections. It provides hooks such as hook_clone_access_alter() that can be used to add custom functionality for specific OA2 plugins. Beyond that, the new oa_clone submodule handles all of the OA2-specific settings, such as the space and section panelizer layout. In addition, a new hook_oa_clone_group_metadata() allows submodules to clone any additional space-specific settings, such as the space colors handled by the oa_appearance module.
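As a rough sketch, a submodule could participate in the cloning process along these lines. Note that the exact hook signature, parameter list, and metadata keys here are assumptions for illustration; consult the API documentation shipped with the oa_clone submodule in your Open Atrium release for the real contract.

```php
<?php
/**
 * Implements hook_oa_clone_group_metadata().
 *
 * NOTE: signature and array keys are assumed for illustration only;
 * check oa_clone's API documentation for the actual definitions.
 */
function mymodule_oa_clone_group_metadata(&$metadata, $original_node) {
  // Carry a hypothetical per-space setting from the original space
  // over to the clone, alongside settings like oa_appearance colors.
  $metadata['mymodule_setting'] = variable_get(
    'mymodule_setting_' . $original_node->nid, NULL);
}
```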
In addition to cloning the nodes and settings, oa_clone creates a new “Space Blueprint” vocabulary term (previously called “Space Type”) to represent your new Blueprint. New fields have been added to this vocabulary to indicate that an existing space should be cloned when creating spaces that use this taxonomy term. When the “Clone an existing Space” option is selected, the panelizer layout is taken from the specified “Space to clone”, along with all of its structure and content.
If you don’t want to clone an existing space you can still create your own Space Blueprint taxonomy term as in the past to point to a specific custom panelizer layout by selecting the “Specify layout and node types by hand” option.
By controlling the cloning process via the “Space Blueprint” taxonomy, you can easily integrate other Drupal tools for importing content into this architecture. For example, imagine you have a list of Projects to be created already saved in a *.csv spreadsheet. Using the Feeds module you can import that spreadsheet to create a new Space node for each entry. By simply specifying the Space Blueprint taxonomy term from the spreadsheet, each new space can be created as a clone of existing space structures, building out all of the child pages that are needed for each space quickly and easily.
In addition to creating a Blueprint from an existing space, you can also just clone content directly. On any content page, a “Clone this content” option is added to the “gear” icon in the upper-right corner of the page for users that have permission to clone content. You can use this to clone an entire space, sub-space, or section. You can also use it to clone specific documents, discussion posts, events, or any other content in Open Atrium. Any “child” content is also cloned, such as content within sections, or replies within a discussion topic.
The ability to create entire Space hierarchies by simply cloning a Blueprint greatly increases the day-to-day usability of Open Atrium 2. Allowing site admins to create their own Blueprints directly from existing Space examples, rather than filling out complex forms, makes it even easier. Open Atrium 2 was designed to be used as a toolkit for building many different types of collaboration sites, from intranets to project management sites to web portals. Each OA2 use case requires a different content structure and set of content-type features. Blueprints make it easy to design your site’s information architecture and keep it consistent across all of your Spaces.
(NOTE: The OA 2.15 release still calls the taxonomy “Space Type” rather than “Blueprint”. The vocabulary was changed to “Blueprint” in the recent -dev version.)
To get more tips and tricks for getting the most out of Open Atrium, sign up to attend this month’s Open Atrium webinar!
As a site building track organizer for DrupalCon Austin this year, I’m really excited about the great submissions we’ve received so far. DrupalCon Austin is shaping up to be a fantastic event, and the site building track will have some great content and insight for all levels of site builders. While DrupalCon Austin is a few months away, the session submission deadline is the end of this week (March 7th)! But never fear, you still have a week to submit your amazing site building session, so let me give you some hints about what kind of sessions we’re looking for. This year for the site building track, we are looking for creative sessions about how people build sites with Drupal, but even more than that, we are looking for three other related topics that haven’t been covered in the past.
Drupal 8 Site Building
First, we want to showcase sessions that discuss how Drupal 8 will affect Drupal site building. Drupal 8 will be a central topic throughout the conference, and I think Drupal 8 discussions surrounding the site building track will be especially engaging and insightful. We want to pick from sessions that dig into the features of Drupal 8 core and how these features might help site builders bring sites to life.
Multi-Site Platform Builds
We’re seeing Drupal adopted by larger and larger enterprise organizations. With this adoption, the conversation is shifting from how to build out a Drupal site, to how to build out a multi-site Drupal platform. We are looking for sessions that highlight this new class of site building in Drupal, in which platforms are developed and used by site builders to create multiple sites from 10 to more than 100.
Content Strategy And Configuration
Finally, we are looking for sessions about integrating content strategy into Drupal site building. This last topic is really valuable: we want to find sessions that explore how strategy affects site configuration and how conventions or limits in Drupal affect content strategy.
We’re looking for the Glue.
While all Drupal site building topics are welcome, we will be looking for sessions that speak to people that have technical skills but do not spend most of their time in code. We feel these “glue” players are a big part of the Drupal community, and we hope the sessions will not only showcase what can be done with Drupal, but how site builders are developing strategies to get it done.
We have already received tons of great sessions, but we would love to see more! You have a week so come on down and submit a session!
We grabbed Mike for a quick Q&A about his new book and experience as a published writer. Enjoy!
Give us the scoop! Tell us a little more about your book.
Sure. Responsive Theming for Drupal is a foundation for learning how to make a Drupal site behave responsively, meaning it should look good on devices of all sizes, without any trickery like theme switching. It first introduces responsive web design (RWD) in general, then runs through an example of making a simple Drupal theme responsive, then dives into working with a few popular base themes to extend them. You’ll also find some RWD gotchas and common issues and client questions, along with solutions.
It’s a book for someone who has a limited knowledge of Drupal and wants to know how to make a Drupal site responsive. By “limited knowledge of Drupal” I mean that the reader should be familiar with basic Drupal concepts like blocks and nodes but might not have much advanced site building or theming experience.
That said, there’s still a lot of meat here for people who have done more advanced theming or development and want to take a deep dive into, say, the Aurora base theme, or implementing responsive images (i.e., loading differently sized images based on device size to save bandwidth).
It’s a pretty easy read. It’s also fairly short, clocking in at 78 pages, so it’s not overwhelming or scary at all.
What made you decide to write a book?
I actually got contacted by a publishing company out of the blue. They asked if I was interested in writing a book on RWD and Drupal, and I said YES. So we started going through the initial back and forth on deciding on book length, specific topics, tone, stuff like that. After a period of that, I finally realized they wanted the book to be a cut-and-dried set of instructions (do this, then that, then this other thing) for each approach to making a Drupal site responsive. They didn’t want me to include any discussion of the multiple approaches you could take, like which approach is good in which situation, why one base theme might win over the others, etc. I fought against that for a while, but it was a losing battle, so I finally said that it wasn’t going to work out.
However, by then I had gotten myself so excited about the idea of the book that I decided to shop around at other publishers to see if anyone else would be interested and would give me a little more freedom. My first choice was O’Reilly because, well, it’s O’Reilly. From there, it was smooth sailing–a few phone calls, a book proposal, and we got the green light!
Why is a mobile-first approach to RWD important?
Good question. Let’s first back up and define a couple things. We’re basically asking whether it’s better to default to the best possible user experience and then dumb things down on devices that don’t support them (i.e., “graceful degradation”) or start with the lowest common denominator and add shiny features in for devices that support them (i.e., “progressive enhancement”). The difference is subtle, but important.
In graceful degradation, you build the site for users with all the top capabilities and technologies – that is, desktop users with good, up-to-date browsers. Once that’s done, you then selectively remove highly interactive features, such as <canvas>, and high-performance features, such as animations, for devices or outdated browsers that can’t handle them. You’ll also want to adapt the layout as the screen size gets smaller by removing extraneous sections, resizing images, stacking columns on top of each other rather than beside each other, and so on.
In progressive enhancement, you build the site first for mobile users. This means you make the site for users with touch-based devices and a small screen, so all the functionality you provide or the designs you come up with are specifically designed to look good and work well on smartphones. And then, once that’s done, you can restyle the design for larger resolutions or add in things that touch-based users would have trouble with.
It’s important to target mobile first, because mobile users are quickly becoming the most important demographic for the majority of new websites. Targeting them first ensures that they have the best experience possible.
Think about it logically. If you build a site for desktop users, everything else becomes an afterthought by definition. For example, suppose that your desktop design includes a fancy slideshow, a sweet widescreen layout, or some hover effects. When the time comes to make it work for mobile, you’ll probably just remove that stuff and replace it with the bare minimum. You might cut out the slideshow or just display all of the items at once. You might stack the widescreen layout with each section on top of the next. You might remove whatever section had the hover effects, if they’re not absolutely necessary. In the end, mobile users will probably end up with what is basically the desktop site except with stuff removed or rearranged so that it doesn’t break on mobile.
However, if you build with mobile first in mind, you’re a lot more likely to take advantage of everything mobile has to offer. Maybe you’ll be able to make use of the touchscreen to build a rich touch-based UI. Maybe you’ll use the accelerometer for some interactive feature. Maybe you are just more likely to build an awesome design that looks great on small resolutions than if you were just trimming down a desktop design. It can take many shapes and forms, but it all goes to show that the first target is important.
How did your work at Phase2 help you on this project?
Working at Phase2 means I’m working with state-of-the-art Drupal implementations for large, complex sites all day, every day. I get a lot of experience dealing with real problems that real clients have, and proposing and implementing solutions.
One of those common problems is obviously “How can I make my site mobile friendly?” That is the question that led to this book being written. At Phase2, we have dealt with that question in many different ways for many different clients, so we’re well versed and able to talk about the pros and cons of the different approaches from experience.
Any future books in the works?
Six months ago I would have said “The world needs a good book on high performance Drupal! I want to write it!” But then High Performance Drupal happened, which filled that niche. So, for now, I’ll probably lay off the writing.
That said, Drupal 8 will bring a whole new set of challenging technical areas that need documentation, so who knows what my answer will be a year from now?
If you ask any Drupal developer about their favorite tools, Views and Panels are almost certain to make the list. They are among the top 100 modules, and with good reason: they provide GUI tools for querying and layout customization.
They also, however, introduce complexity behind-the-scenes.
The technical debt involved with preprocessing views templates or creating custom panel layouts is hard to justify for projects that have small budgets, tight timelines, or hyper-specific design requirements.
One approach is to have a front-end developer reconcile the flat designs with the DOM output introduced by the implementation. While this is common, it adds technical design debt as you deal with unneeded repeated elements, missing HTML5 tags, and unstructured CSS.
You also introduce QA headaches, as bridging the gap between what is designed in Photoshop and what is implemented rests solely on a subjective design eye.
I believe a better approach is to use static prototypes and manipulate Drupal to the desired DOM using core API functions.
I call it Keeping Drupal Simple (KDS).
Let’s explore the KDS philosophy as well as some implementation techniques.
Every decision should be tied to a stated business goal. The ‘what if’ scenarios of users, both visitors and site administrators, interacting with every option Drupal can provide generally push budgets upward.
Although being able to think of all scenarios is something developers pride themselves on, most business goals can be solved with specific definitions of success and simpler technical solutions.
One way to focus this effort is to ask the right questions. Instead of asking ‘Do you need to be able to do x?’, follow the principles of Agile and ask ‘What business goal is served by being able to do x?’
This subtle change allows developers to redistribute their efforts from feature rich technical implementations to high value client experiences. It also forces the client to think about the true value of a feature request as it relates to their audience.
This results in reevaluating the need for modules like Views or Panels, because site configuration priorities are lower than the design and content creation priorities.
Below are four questions I ask when implementing KDS:
What is the business value of editors being able to change the layout / have a dashboard / administer X type of content?
How do you define a successful site visitor experience?
When do you plan for your content creators to create real content and give you feedback?
What business value is lost if X feature is moved to a Phase 2 or 3?
It goes without saying that this approach does not work for every project. Evaluating the needs of your client, the project constraints, and the abilities of your team through proper discovery is essential. KDS may work for you if:
The project has a small budget and timeline
There is a lack of strong front-end acuity on the implementation team, whether from availability, skill, or budget
The front-end team knows [insert front-end framework], but not Drupal
Site configuration and layout will remain mostly static after launch
Not all of the above factors are required. If your project does satisfy a majority of the conditions, I would consider KDS.
One of the driving factors of cost is design interpretation. Based on a given a PSD design or wireframe set, expectations are set with the client about look, feel, and interaction.
The cost to bridge the gap between what is possible on the web and what is designed can be vast.
This can be mitigated by designing in browser and starting with HTML prototypes. Providing a simple static homepage and interior page can set the proper expectations of menu interaction, block placement, and feedback in various browsers.
Thankfully, methods like style tiles and atomic design take this even further by providing a common design relationship between elements.
This accelerates the on-boarding of new project developers because the styleguide contains both design and DOM conclusions.
The static prototype also becomes the source of truth for all decisions.
Being able to explain in concrete terms what elements of a page will and will not have user-defined configuration goes a long way to keeping deliverables within budget.
Work is also done in parallel, decreasing inefficiency.
Front-end developers communicate ideas via static prototypes to the client while also optimizing their DOM and CSS.
Backend developers develop functionality that produces the prototype DOM.
The challenge becomes bending Drupal towards that markup goal. When applying the static prototype within Drupal, the theme becomes the meeting point. Base themes tend to get in the way and add extra markup or classes, so starting from scratch is best.
The best way to build a theme from a static prototype is to take your interior page and set it as the page.tpl.php. This gives you a starting point for swapping static components for dynamic Drupal components.
As you swap, you can compare the output of your theme to the static prototype and provide an iterative QA piece for the team.
Some best practices include:
Creating an html.tpl.php and swapping out the contents of the <head> tag
Swapping the area where a menu would be for themed menu output via menu_navigation_links()
Combining the static homepage prototype elements and wrapping them in a conditional that can be set in template.php
Swapping sidebar areas for regions and hard-coding them temporarily in template.php to check output
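For example, the menu swap in the second bullet might look something like this in page.tpl.php. This is a sketch for Drupal 7; the menu machine name and the class carried over from the static prototype are assumptions.

```php
<?php
// page.tpl.php (sketch): replace the prototype's static <nav> markup
// with Drupal's themed main-menu links.
$links = menu_navigation_links('main-menu');
print theme('links__system_main_menu', array(
  'links' => $links,
  // Reuse the class the static prototype used for its menu (assumed).
  'attributes' => array('class' => array('main-nav')),
));
```

Because theme('links__...') emits a predictable <ul> of links, you can diff its output directly against the prototype’s menu markup as part of the iterative QA pass described below.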
Using your prototype in your tpl files provides an added bonus of only needing to swap out areas if something is dynamic. This accelerates your pace to the deliverable as you’re not bogged down with parts of the DOM that will never change and, thus, do not need rendering from the Drupal ecosystem.
API Over Everything
The most important technical implementation philosophy for developers implementing KDS is to always reach for the core API first. The reason isn’t that third-party modules are bad; it’s that they carry assumptions that can lead to technical debt. Many times, budgets are blown because of the work required to undo a default assumption.
For example, if a static prototype homepage has a listing of content, the delivered DOM may use the article HTML5 tag to segment out the content from the rest of the site.
If you build that listing with Views, you’ll have to:
Build the view with the right conditions
Export the view to code
Preprocess the view or create new tpl files
Work to ignore or work around the extra DOM included by Views
Hope the requirements of the view don’t change
Hope the client doesn’t change anything in the Views interface
Conversely, you can use EntityFieldQuery to grab your listing and theme the results in a custom theme function with the exact DOM desired.
You now have eliminated technical debt from the Views ecosystem and eliminated design debt due to new DOM elements.
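A minimal sketch of that approach in Drupal 7 might look like the following (the module name, theme hook, and markup are illustrative assumptions; the theme hook would also need to be registered via hook_theme()):

```php
<?php
// Fetch a listing of published articles with EntityFieldQuery (Drupal 7
// core) instead of Views, then render it with the exact DOM the static
// prototype specifies. "mymodule" and the theme hook name are illustrative.
function mymodule_article_listing() {
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'article')
    ->propertyCondition('status', NODE_PUBLISHED)
    ->propertyOrderBy('created', 'DESC')
    ->range(0, 10);
  $result = $query->execute();
  $nids = isset($result['node']) ? array_keys($result['node']) : array();
  return theme('mymodule_article_listing', array('nodes' => node_load_multiple($nids)));
}

// Theme function producing exactly the <article> markup the prototype
// calls for -- no extra wrappers, no extra classes.
function theme_mymodule_article_listing($variables) {
  $output = '';
  foreach ($variables['nodes'] as $node) {
    $output .= '<article class="listing-item">' . check_plain($node->title) . '</article>';
  }
  return $output;
}
```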
While this isn’t a critique of the exporting UI workflow, do take time to consider the impact of multiple environments, multiple developers, multiple iterations of development, and a tight timeline.
What happens to your budget when there is a conflict?
What happens to your scrums when clients want to know what ‘that button does’?
The goal is to deliver high business value within a limited scope. This, in turn, reduces the likelihood of scope creep introduced by additional configuration options and forces the team to concentrate on the business values communicated at project start.
Q: What happens if I need to update a field or setting, if not by Features?
Review the Field API and update through an update hook.
Also, Features are great! I’m not knocking Features. Consider using a Features export to stub out your update hook: take the proper configuration array from the export and pipe it into your install or update hook.
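As a sketch of what that looks like in practice (field names and the module name are invented for illustration), a Drupal 7 update hook can create a field with the core Field API directly:

```php
<?php
// mymodule.install (Drupal 7): create a field and instance in an update
// hook rather than relying on a Feature to stay in sync. The definition
// arrays here could be lifted from a Features export. Names are
// illustrative assumptions.
function mymodule_update_7001() {
  $field = array(
    'field_name' => 'field_subtitle',
    'type' => 'text',
    'cardinality' => 1,
  );
  field_create_field($field);

  $instance = array(
    'field_name' => 'field_subtitle',
    'entity_type' => 'node',
    'bundle' => 'article',
    'label' => 'Subtitle',
  );
  field_create_instance($instance);
}
```

Because this lives in code, the change is committed, reviewable, and replays identically on every environment.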
Q: What about initial WYSIWYG configuration, Pathauto configuration, or some other common configuration?
See answer above.
Q: Seriously, though. Why would I do this when a module does this for me?
Code requires commits, and commits provide a history. Having a history of what was changed is good: it helps team members, and it helps project managers under tight deadlines and budgets. History gives you a breadcrumb trail of work.
Learning the Drupal API also gives you confidence of how things work under the hood. Troubleshooting is easier if you know the execution processes behind those buttons.
Fields can be added to block types via the form API
I haven’t had a chance to use Edit or the other recommendations in the Spark distribution. I’ll try to post an update to KDS once I’ve had a chance to evaluate it, especially since it’s in Drupal 8.
KDS is designed to strip down the implementation plan of a Drupal solution from the discovery process to the technical implementation. It’s not one-size-fits-all, nor is it a critique of popular modules like Views and Panels. Its main goal is to provide a philosophy of simplicity inside the complex world of a Drupal project.
Please let me know what you think!
I’m curious to hear the community’s feedback and if others have found other ways to keep projects streamlined.
Being able to effectively and efficiently collaborate with colleagues is something that every organization struggles with. Whether it be the process in which collaboration occurs or the tool that facilitates the process, the vastness of the collaboration solution landscape (as seen below) highlights the extensive diversity of needs.
Members of the higher education space are no strangers to the challenges of collaboration. In any given university or college, the search for a useful and practical collaboration tool generally involves multiple invested parties, all of whom work in very different ways. Mix that with limited budget, a wash of personal preferences, the palpable disdain of having to learn a new system, and the inability to adapt fast enough to keep up with an ever-changing landscape of solutions and we’ve arrived at an overwhelming feeling of irritation and defeat.
1. Low Cost of Entry

As an open source distribution, Open Atrium 2 (OA2) is an excellent option for those constrained by a tight budget. Not only is the codebase free, but anyone can try it on for size before concrete decisions need to be made for an entire university or college. Spin up a version for your department or an upcoming event and see how it really works before you commit to it for your institution.
Once you’ve decided Open Atrium is the right fit for you, there are no pesky licensing fees that are typically encountered with proprietary solutions. Any money that is spent on configuring or customizing your installation of OA2 provides features and functionality that increase the value of the platform to your user base as opposed to continuing to pay for software (which you’ve already paid for). Not to mention that future extensions of your platform are based on your institution’s specific needs, as opposed to pre-set packages of features that may or may not be relevant to your users.
2. Configuration & Customization Per Relevant Party
If you’ve ever had the pleasure of working in higher education, it is likely that you’ve run into the following scenario: while the university has one unified brand, every department, school, organization, and group underneath that unified brand has an inherently unique personality and/or processes that could not possibly be shared by another department and must be expressed regardless of the impact they have on others. (Oh, the difficulties of making so many masters happy!)
The wonderful thing about OA2 is that those pieces of individuality can be implemented from both a functionality and design perspective without having to cut off the proverbial arm of your platform. Does one department have a layout and color palette that varies significantly from the rest? No problem! There are ways to change layouts, positioning of widgets, colors, etc. without needing to involve anyone on your technical team. Does “X” department have a different workflow than “Y” department? Configure them appropriately and apply them to their respective spaces. Done! Long gone are the days of hyper-generic platforms that, by trying to please everyone, please no one at all.
3. Integration with Existing Systems
Whether you’re looking to replace an existing collaboration system or implement one for the very first time, if your organization has functioned for any period of time, there are going to be legacy systems. In fact, these legacy systems might do their particular job incredibly well, and you may not want to get rid of them at all.
OA2 is built as a pluggable framework that allows for you to integrate with these external systems. Instead of completely reinventing the wheel, build OA2 as your base and build integrations for relevant software in order to create one cohesive system for users. If you’re curious as to how this works on the back-end, check out our Extending OA2’s Capabilities webinar.
4. Optimized for Mobile
In today’s world, users expect to be able to access their content across a variety of devices and viewports. This is particularly pronounced when you are trying to interact with digitally savvy users – aka, your average college student (student-teacher portal, anyone?). Keeping responsive design in mind when considering content strategy for higher education is absolutely critical.
Lucky for you, OA2 is optimized for mobile and responsive-ready out of the box. Built with the lightweight, customizable Bootstrap theme, 31 responsive layouts, and several responsive image styles, OA2 is capable of providing a seamless experience for your users regardless of their device.
5. Data Security & Identity Management
Considering the fact that a social collaboration portal for any higher education institution is likely to have tens of thousands of users with various levels of data privacy, it’s easy to see why a system with granular access control is essential. OA2 provides robust user management that can be automated, utilizing protocols such as LDAP or Active Directory to streamline the process of granting permissions and keeping the user database appropriately updated. In addition, users can be grouped together or managed down to the individual level when it comes to granting access controls and capabilities on the platform. That means the right content gets to the right people and is done so in an efficient manner.
Recently while working on a client project I had the need to split some code out of a subdirectory in one Git repository into its own separate repository.
In particular, we had built a Drupal site with some custom modules included in the project’s repository, and we now wanted to separate those modules into their own repositories so that other sites within the organization could use the same code.
Just copying and pasting would have been a quick way to accomplish the task, but I really wanted to be able to preserve the commit history of the code even after it was separated.
Luckily there are some pretty powerful features within Git that allow you to literally rewrite history when it comes to performing tasks like this.
In this example, let’s say the project’s repository was structured something like this:

project/
└── src/
    └── modules/
        └── my_custom_module/
            ├── my_custom_module.info
            └── my_custom_module.module

All that we want are the contents of src/modules/my_custom_module/ in our new repository because we’re going to use Drush Make to do a git checkout to include the module. We want our final repository structure to look like this:

my_custom_module/
├── my_custom_module.info
└── my_custom_module.module
Git has a great command called filter-branch which allows us to pull this off pretty easily!
Here’s what I did:
First, I prepared a new repository for the module code:
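One way to do that (the repository name here is illustrative) is with a bare repository that will receive the module’s history:

```shell
# Create a new, empty bare repository to receive the module's history.
# On a hosting service (GitHub, GitLab, etc.) you would instead create
# an empty repository through its interface.
git init --bare my_custom_module.git
```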
Next, I went to my project’s repository where the code I wanted to separate was located and ran these commands:
# Create and switch to a branch that will hold the filtered history.
git checkout -b 7.x-1.x

# Use filter-branch's subdirectory filter to reduce the history to only
# commits that affected this directory path. This also takes care of
# rewriting the history so that files within this subdirectory path will
# now appear at the root of the repository.
git filter-branch --subdirectory-filter src/modules/my_custom_module 7.x-1.x

# Add the new repository as a remote, then push the filtered history
# and code to it.
git remote add my_custom_module <url-of-new-repository>
git push my_custom_module 7.x-1.x
So at this point, we have filtered the code we wanted out of the project’s repository, made a branch of just that filtered code, and pushed it to our new module-specific repository. You can verify your code is in the module-specific repository by running a git checkout 7.x-1.x (the branch you pushed to that repository) and checking the git log.
Now that you have separated this code into its own repository, let’s say you want to remove every trace of it from your project’s repository. You can use git filter-branch to do that, too.
# Use git filter-branch's index-filter to rewrite our commit history and
# remove any commits that touched our now-separated module's former location.
git filter-branch --index-filter \
  'git rm -r --cached --ignore-unmatch src/modules/my_custom_module' \
  --prune-empty -- --all
And there we have it, the commit history of our separated module still exists in its own module-specific repository, but we’ve rewritten history in our project repository so that it doesn’t look like that module ever even existed in that code.
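If you want to see the subdirectory split in action without touching a real project, here is a self-contained sketch in a throwaway directory (all file and repository names are invented for illustration):

```shell
set -e
# Work in a throwaway directory.
cd "$(mktemp -d)"

# Build a toy "project" repository with a module in a subdirectory.
git init -q project
cd project
git config user.email demo@example.invalid
git config user.name "Demo"
mkdir -p src/modules/my_custom_module
echo "name = My Custom Module" > src/modules/my_custom_module/my_custom_module.info
echo "unrelated project file" > README.txt
git add .
git commit -qm "Initial commit"

# Silence the interactive filter-branch warning (git >= 2.24).
export FILTER_BRANCH_SQUELCH_WARNING=1

# Rewrite history so the module subdirectory becomes the repository root.
git filter-branch -f --subdirectory-filter src/modules/my_custom_module HEAD

# my_custom_module.info now sits at the repository root; README.txt and
# the old src/ tree are gone from the rewritten history.
ls
```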
Some important notes:
You’re rewriting history when you’re doing these operations and that can be dangerous and messy.
It can be particularly challenging because of the distributed nature of Git – your remotes will have to be force-updated, and you will have to contact anyone who might have cloned your repository to alert them to the changes. Think of it as convincing multiple real-life historians, each keeping their own copy of a timeline, to suddenly update their copies because you’ve revised yours – it’s tricky. Check out Chris Johnson’s blog for more on digging into Git!