There is no stasis in technology, and Drupal is a prime example. Developers the world over celebrated the launch of the much-anticipated Drupal 8 last November. Last month, Drupal 8.1.0 was released. The confetti has settled, and our thoughts turn to the future of the platform.
New features in Drupal 8 changed our development practices here at Zivtech, and we’re now pondering what things might look like down the road as development for Drupal 9 begins. We will adapt again. That’s not a problem for us, and I hope it isn’t one for you either. Drupal, like all things in technology, changes as the demands on it change.
During Drupaldelphia 2016, I had the opportunity to talk with a few people who are actively involved in core development. The basic sense I got was that there’s not a lot of focus on D9 yet. Most attention is on the interim releases that will iteratively improve the foundations of D8, so we should be on the lookout for exciting features in the 8.2.0 and 8.3.0 releases. That aside, we can make some educated guesses about where Drupal 9 might take us.
Drupal development tends to go where trends in web development go. We lauded the initiative to build configuration management into core because, as developers, we embrace the practices of the wider development community: automating deployments and making site builds more stable and resilient in the face of problematic changes. Find the pains of developers, site builders, and users, and you are likely to have found the places where we will see big changes in Drupal.
One trend we’ve all seen growing over the past few years is the shift in how users think about websites. Web applications like Google Docs, and tools from companies like Mapbox or CartoDB, have made it clear that complex interfaces and interactions can work on the web and be successful. Our users are accustomed to opening a browser tab to use a service in the same way they might open a native application.
When I discuss the strengths and benefits of Drupal with people new to it, I often talk about Drupal’s security practices and the Drupal security team. I’m sure that I am not alone in this, and I know that the community as a whole takes pride in this reputation. Following standards, known best practices, and development guides, and using solid APIs, are easy steps that any developer can take to harden a site against attack. But what will happen as many more people start to leverage Drupal only for the underlying platform? Much of the shielding and immunity to attack that the community has embraced over the years does not extend to a decoupled or headless Drupal, and that means exploits will likely occur.
To protect the reputation of Drupal and the community of developers standing behind it, I’d guess we’ll see the adoption of some standards, front-end libraries, and REST APIs to help ensure that Drupal can be used securely even while developing complex features.
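As a concrete illustration of where headless builds tend to go wrong today: Drupal 8’s core REST module rejects cookie-authenticated write requests that lack an X-CSRF-Token header (obtained from the site’s /session/token route). Below is a minimal sketch, in Python, of building such a request correctly; the site URL and the helper function are hypothetical, and the request is constructed but not sent.

```python
import json
import urllib.request

# Hypothetical site URL for illustration only.
SITE = "https://example.com"

def build_node_post(csrf_token, title):
    """Build (but do not send) a POST that creates an article node
    via Drupal 8's core REST module."""
    body = json.dumps({
        "type": [{"target_id": "article"}],
        "title": [{"value": title}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=SITE + "/node?_format=json",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            # Omitting this header is a common mistake in headless
            # builds; Drupal answers with a 403 instead of creating
            # the node. The token comes from GET /session/token.
            "X-CSRF-Token": csrf_token,
        },
    )

req = build_node_post("token-from-/session/token", "Hello, decoupled world")
```

The point is not the specific client code but the habit: a front-end team consuming Drupal as a platform has to carry these protections forward themselves, which is exactly the gap standardized libraries could close.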
Chances are, if you run your own servers these days, they are virtual machines running on someone else’s infrastructure. If you run on AWS, Azure, Rackspace, DigitalOcean, Linode, or any of hundreds of other hosting plans, you’re probably not running dedicated hardware. All of these companies provide tools to create servers at the click of a button, destroy servers via an API, or script the building of a complex web of interconnected machines. At an ops talk late last year, it became clear to me that for many of us, uptime is no longer a point of pride but a cause for alarm: the longer a server is running, the greater the chance it has been compromised. For many companies, deploying a new version of an application means deploying the entire stack of servers that run that system. The idea of Blue-Green deployments is fantastic and makes a lot of intuitive sense, but such deployments have had challenges in a Drupal ecosystem.
Drupal has historically had problems running across multiple databases, and running two versions of a site pointing to a single database can be seriously problematic. In Drupal 8 we now have universally unique identifiers (UUIDs) on entities. A UUID lets us reconcile data updates, saves, and deletions even when running across multiple database heads or many servers. UUIDs aren’t used extensively in core yet, but the work surrounding content deployment and the associated contrib modules relies on them. I would guess that we’ll see more API functions to leverage the power of UUIDs in future versions of Drupal.
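To see why UUIDs matter here, consider the core problem with serial entity IDs: two database heads can both hand out ID 42 to different pieces of content, so their data can never be merged safely. The toy sketch below (plain Python, not Drupal API code; the revision-numbering scheme is an assumption for illustration) reconciles entity saves from two heads by UUID, keeping the newer revision of each entity.

```python
import uuid

def reconcile(head_a, head_b):
    """Merge two {uuid: (revision, data)} maps from separate database
    heads, keeping whichever revision of each entity is newer."""
    merged = dict(head_a)
    for key, (rev, data) in head_b.items():
        if key not in merged or rev > merged[key][0]:
            merged[key] = (rev, data)
    return merged

# The same entity exists on both heads because the UUID travels with it;
# with serial IDs, "node 42" on each head could be unrelated content.
node_uuid = str(uuid.uuid4())
head_a = {node_uuid: (2, {"title": "Edited on head A"})}
head_b = {node_uuid: (1, {"title": "Original title"})}
merged = reconcile(head_a, head_b)
```

Last-write-wins is the simplest possible policy; the value of the UUID is that it makes any such policy expressible at all, because both heads agree on which entity they are talking about.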
Finally, along the lines of Blue-Green deployments, I’d hope to see a strategy of intermediary database updates. One of the hardest parts of a Blue-Green deployment is that, for a short time, both versions of the code need to run successfully together. At the database level, this means queries need to work both before an update and after. The only way to do this is to have incremental updates that preserve backwards compatibility for one release and then remove the backwards compatibility in a second release. This requires more overhead, but it means deployments can happen with zero downtime. This is more of a wish than a prediction. We can develop our own code and updates with these ideas in mind, but it is likely too much effort and too niche for the entire community to embrace.
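The two-release pattern described above is often called expand/contract. Here is a minimal sketch of it using an in-memory SQLite database; the table and column names are hypothetical, not Drupal’s schema. Release 1 adds new columns alongside the old one and backfills them, so old and new code both keep working; release 2 retires the old column only once no running code depends on it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada Lovelace')")

# Release 1 (expand): add the new columns next to the old one and
# backfill, so blue (old code) and green (new code) can run at once.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")
conn.execute("""
    UPDATE users SET
      first_name = substr(name, 1, instr(name, ' ') - 1),
      last_name  = substr(name, instr(name, ' ') + 1)
""")

# Release 2 (contract): only after every server runs the new code is
# the old column retired, here by rebuilding the table without it.
conn.execute("""
    CREATE TABLE users_v2
      (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)
""")
conn.execute("INSERT INTO users_v2 SELECT id, first_name, last_name FROM users")
conn.execute("DROP TABLE users")
conn.execute("ALTER TABLE users_v2 RENAME TO users")

row = conn.execute("SELECT first_name, last_name FROM users").fetchone()
```

The overhead is real: every destructive change becomes two deployments instead of one. But at no point does either version of the code see a schema it cannot query, which is the property a zero-downtime deployment needs.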
As trends and the demands of our clients and users shift, Drupal will probably shift with them. However, Drupal has been a developer-centric CMS for a long time, so it is a safe bet that trends in development and deployment will play a large part in shaping its future. Set a calendar reminder and check back in two years to see if any of these predictions are coming to life.