Mandi Walls will open the Conference Programme.
Systemd is in all the major distributions nowadays, and there are many ways you can take advantage of it. It provides an easy way to manage your system and your services, and it interacts closely with kernel features added in recent years, such as cgroups.
This talk will show you how to get the added value of systemd and easily do a lot of things that were complicated in the past.
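To give a flavour of what the talk covers, here is a minimal sketch of a systemd service unit; the service name, binary path, and resource limits are my own illustrations, not taken from the talk:

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/myapp.service
[Unit]
Description=Example application service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure
# cgroup-backed resource limits, one directive each
# (MemoryMax= needs a reasonably recent systemd; older
# versions used MemoryLimit=):
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
```

With a unit like this in place, `systemctl enable --now myapp` starts the service, and `systemd-cgtop` shows the cgroup-backed resource usage.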
Or, let’s talk about Choreography rather than Orchestration.
We’re used to using Promise theory to talk about how configuration management interacts with our servers. But can we apply the same concepts to our entire estate? What does that mean? In this talk, we’ll look into a bit of promise theory, investigate how we can apply it to applications and not just servers, and then from there we’ll talk about systems.
Ansible is a tool for managing the configuration and provisioning of Linux desktops, servers, and virtual machines. With minimal client requirements (SSH and Python), it’s easy to roll out to both existing infrastructure and new machines. You can use Ansible to manage not only core configuration such as networking, but also individual services and databases.
This talk will show you how to get Ansible up and running and use it to manage a basic Linux server with a collection of services (mail, web and a firewall). We’ll explore the pros and cons of Ansible and why you might choose it over other configuration management options. Finally, we’ll see how to combine Ansible with Git to automate the process, providing rollback support and making sure you never forget to deploy a change.
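For illustration, a minimal playbook in the spirit of the talk might look like this; the group name and package are assumptions of mine, not taken from the talk:

```yaml
# site.yml (hypothetical): install and start a web server
- hosts: webservers
  become: true
  tasks:
    - name: Install the web server
      apt:
        name: nginx
        state: present

    - name: Ensure the service is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

A playbook like this would be applied with `ansible-playbook -i inventory site.yml`, needing nothing on the target beyond SSH and Python.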
While physical replication in PostgreSQL works well and has good performance, there are many use-cases where it is not a good fit. These mainly include cases where only partial replication is needed, where multiple databases need to be replicated to the same target, where the data needs to be transformed, or where replication between different versions of PostgreSQL is needed for upgrading without downtime. These use-cases can be solved by logical replication. The traditional logical replication solutions for PostgreSQL are based on triggers, which results in a high impact on write performance. That’s why we developed pglogical, which is based on logical decoding of the write-ahead log and thus has minimal impact on the performance of the source database.
In this talk I will describe the current state of the project, which recently became public. I will cover the use-cases that already work well, the current limitations, and the future roadmap.
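As a rough sketch of how pglogical is driven from SQL (the connection strings and node names here are invented for illustration; check the project documentation for the exact calls):

```sql
-- On the provider:
SELECT pglogical.create_node(
  node_name := 'provider1',
  dsn := 'host=provider.example port=5432 dbname=mydb'
);
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On the subscriber:
SELECT pglogical.create_node(
  node_name := 'subscriber1',
  dsn := 'host=subscriber.example port=5432 dbname=mydb'
);
SELECT pglogical.create_subscription(
  subscription_name := 'subscription1',
  provider_dsn := 'host=provider.example port=5432 dbname=mydb'
);
```

Because the changes come from logical decoding of the write-ahead log rather than triggers, the provider side carries minimal extra write overhead.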
OpenNMS is already known as an Open Source Enterprise Grade Service
Assurance platform for handling Fault and Performance data collection
across network and compute infrastructures. However, to date it has
relied on the industry-standard RRDtool for storage and graphing of
performance data. This solution has served OpenNMS users well over the
years but has reached the limits of scalability and flexibility for
monitoring very large data centres.
In the latest release of OpenNMS we have given users the option to
replace RRD with the Cassandra NoSQL database as a back end data store.
Not only does this address the scalability and data integrity issues
inherent in RRDtool, but it also provides OpenNMS users with the
opportunity to retrieve data for complex ad-hoc calculations in a way
which was difficult in the past. This positions OpenNMS to become the
best cost-effective solution for aggregating systems performance data
across very large infrastructures. It also opens the possibility of
using OpenNMS as a distributed data collector for many IoT and
engineering applications beyond its roots in network management.
This talk will give an overview of the benefits and features of OpenNMS
using a Cassandra cluster.
This presentation is about a tool for deploying virtual machines,
using orchestration and configuration management to install them on
physical machines, in the cloud, or with any other provider of bare
metal.
The tool uses existing programs such as Ansible and libvirt to interact with the hypervisor and create the virtual machines.
A generic example will be explained, using libvirt, with KVM as the
hypervisor, to demonstrate how easy it is to deploy virtual machines
from a text file definition, which can even be generated by a machine,
giving you the ability to connect this to your monitoring or pipeline
tools. The presentation will explain how the tool works and how easy it
is to configure the tool to your specific server/datacenter/cloud. It
then demonstrates how to create such an example in a few simple
steps, how it builds up a complete army of machines, and how the
integration with monitoring and pipeline tools can be extended.
Just what it says on the tin: lunch.
At DevOps Days Rome 2012, in November, Ulf Mansson proclaimed his new-found love for monitoring and we changed the hashtag into #monitoringlove. Based on a new era of open source tools, Ulf started loving monitoring again. And for a lot of us he was absolutely right.
Over the past 5 years an enormous number of new tools and new patterns have come out of the community, sometimes tagged with #devops, and pretty much all of them open source. Do you still know what you should be using for what? And what the differences are? An opinionated overview of the open source monitoring landscape to clear up the confusion about what you should use, or to make the decision even more difficult for you.
Many operations folk know Linux filesystems like ext4 or XFS, know the schedulers available, see the OOM killer coming, and more. However, appropriate configuration is necessary when you’re running your databases at scale.
Learn best practices for Linux performance tuning for MariaDB/MySQL, PostgreSQL, MongoDB, Cassandra and HBase. Topics that will be covered include: filesystems, swap and memory management, I/O scheduler settings, using the tools available (like iostat/vmstat/etc), practical kernel configuration, profiling your database, and using RAID and LVM.
There is a focus on bare metal as well as configuring your cloud instances in Amazon EC2.
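As one small example of the kind of kernel configuration discussed, a sysctl fragment for a dedicated database host might look like the following; the filename and values are illustrative starting points of mine, not recommendations for every workload:

```ini
# /etc/sysctl.d/99-database.conf (hypothetical filename)
# Keep the database's working set in RAM rather than swapping:
vm.swappiness = 1
# Flush dirty pages earlier to avoid large write bursts:
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
```

A fragment like this is applied with `sysctl --system`, and its effect observed in practice with tools such as vmstat and iostat.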
Have you ever thought it would be great if the OpenBSD pf packet filtering framework could be ported to GNU/Linux? Well, that isn’t likely to happen, but we do now have nftables, a drop-in replacement for iptables which uses the kernel’s netfilter framework, and the syntax of nftables looks a lot like pf.
Patrick McHardy, of the Netfilter Core Team, first presented the idea in 2008 at the Netfilter Workshop. Development stalled until Pablo Neira Ayuso took up the reins, and now it is under active development again, having been merged into the Linux kernel mainline tree in version 3.13.
Whereas before, in order to filter packets, we were presented with a range of tools (arptables, ebtables, iptables and ip6tables), now all these tools have been merged into nftables, making the job of packet filtering much easier. There is an nftables wiki, still at a fairly rudimentary stage, but Pablo has been kind enough to grant me write privileges and I’m currently working my way through it.
This presentation will be an introduction to nftables, its syntax, and how it may be used to replace iptables and the concomitant tools.
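To give an idea of the pf-like syntax, here is a minimal nftables ruleset sketch; the ports and policy are illustrative, not a recommendation:

```
# ruleset.nft: one inet-family table covering IPv4 and IPv6
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport { 22, 80, 443 } accept
    }
}
```

A file like this can be loaded atomically with `nft -f ruleset.nft`, replacing separate iptables and ip6tables invocations with a single table.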
For a long time, monitoring was placed at the end of the deployment and delivery chain to ensure availability in common environments. But modern infrastructures, especially highly dynamic microservices, require monitoring to sit in the middle of your toolstack. Monitoring is no longer only about availability: it is about metrics, logs, big data and, last but not least, integration with the popular tools out there.

With Icinga2 we changed the way availability monitoring works and scales, but we don’t want to leave it at that. Providing a full-featured API makes monitoring much more dynamic and enables you to take care of volatile infrastructures. On top of that, integration with your favourite log and metric systems, and of course with configuration management, is a key part of our future focus.

The presentation will introduce the Icinga2 API and show some examples of placing monitoring as an important and early part of your lifecycle. In addition, it explains how different metrics and log information can be unified at a central spot while still using and relying on the flexibility and power of the individual tools.
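By way of illustration, querying the Icinga2 REST API can be as simple as the following sketch; the hostname, credentials and filter are assumptions for the example, not details from the talk:

```
# List services that are not in the OK state (state 0):
curl -k -s -u apiuser:secret \
  'https://icinga2.example:5665/v1/objects/services?filter=service.state!=0'
```

The same API accepts writes as well as reads, which is what makes it usable from configuration management and pipeline tooling.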
The popularity of ebooks has revolutionised the publishing industry.
Now, anyone can publish a book and sell it to the world through Amazon
(other ebook marketplaces are available). Of course, actually writing
a book is still hard. And doing the marketing so that people will
actually buy your book is hard. But the process of taking some text
and turning it into an ebook has become remarkably easy. In this talk,
we’ll look at some free tools that you can use to turn your random
scribblings into an ebook.
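One of the free tools likely to come up is pandoc (my example here; the talk may cover different tools), which can turn Markdown chapters into an EPUB in a single command; the filenames are illustrative:

```
pandoc --metadata title="My Book" -o my-book.epub chapter1.md chapter2.md
```

The resulting EPUB can then be previewed in any ebook reader before being uploaded to a marketplace.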
Shell scripts are highly portable, but can end up large, ugly, and hard to split up for maintenance. Wouldn’t it be better if you could use a more advanced language without losing the advantage of portability? How about Perl? This talk will go over how to use Perl for creating CLI tools using modern methods, and how to ‘pack’ the modules down to a single file for portability.
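One tool for that packing step is App::FatPacker (my example; the speaker may use something else), which traces a script’s CPAN dependencies and inlines them:

```
# Produce a single-file version of the script, with its
# non-core modules embedded (script name is illustrative):
fatpack pack mytool.pl > mytool.packed.pl
```

The packed file can then be copied to any machine with a system Perl, with no module installation required.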
Time for those short 5 minute talks to get you energised before dinner!
There are several well-known heavyweight server monitoring solutions.
Whilst these are excellent in their own right, they are sometimes
overkill, if all you have is a simple need to keep a watchful eye on a
few servers. This talk will give an overview of Admonitor, what it can
be used for, and its future direction.
I will speak in detail about how developers can shine by ensuring that not only their CV and interview technique are up to scratch, but their whole professional brand; how companies can transform their brand and reputation to ensure they attract the best talent; and, if you are a consultant or contractor, how your brand awareness will help you find more work.
In this talk I give an overview of the current state of Open Source DNS servers, both authoritative and recursive. I point out some specific features/strengths of each and give you hints on when to use which. I will mention the word “DNSSEC” at least once; that’s a promise.
Configuration management tools like Puppet have revolutionised the way we manage servers, allowing us to manage ever larger server estates in a repeatable, consistent way. However, the advent of containerisation and microservices has led to a trend towards deployment from “golden” images. Does this mean traditional configuration management tools like Puppet no longer have a place?
MariaDB has made some extensions to security around the database, and this talk discusses them.
“When you’re up to your ass in alligators, it’s hard to remember you were supposed to be draining the swamp.”
Architecture has accreted, your infrastructure automation is barely extant, and production hates the living.
When you’re sprinting to find product/market fit, things always fall through the cracks.
That’s ok. We have the technology to fix this.
First, discover what you already have.
Second, derive an understanding of how it fits together.
Third, document what’s actually real and what’s incidental.
Now you can infer your systems’ and services’ dependencies, and start to rationalise your deployments.
This is a tale of starting from zero, a tale of adapting to reality as it is and then prioritising when, where and how to bend it to your will.
This is a tale of not giving up.
One alligator at a time.
Do you love open source and want to make enough money to pay the bills? Dawn made an accidental career out of open source over 13 years ago, and it changed her life. It has given her an opportunity to work with amazing people and travel the world while doing work that is more fun than any job should be.
This session will start with why you might want to make a career out of open source. The bulk of it will explore the many ways to get open source to pay your bills. Even if you already have one of these jobs, this talk will provide options for additional career paths, tips on what to do to improve your chances of getting that next gig, and how to avoid sabotaging your career. Dawn will share her stories about how she ended up here, along with some of her time management tips to avoid letting this work take over your entire life (unless you want it to)!
The audience is anyone who is interested in making a career out of their work with open source software or improving their existing careers. Attendees can expect to learn how to find a new career in open source software or improve an existing career. They will also get useful advice about things to do and not to do that will improve their chances of getting that next job.
This talk shows the implementation of a simple IoT control for a household boiler and how it can be done by a hobbyist developer using open and free tools. The level goes from a hardware overview up to the open and free software components and protocols used to build a workable system.
Extended Lunch on day two, to give space for birds of a feather sessions. Sign up on the board at the conference, or if you have a good idea you don’t want to forget beforehand, email firstname.lastname@example.org.
The release and publication of public data on portals and other platforms is increasingly becoming a top priority among governments the world over. Making public data freely available and accessible to all citizens is believed to increase transparency and accountability in public affairs, reduce corruption and, more importantly, empower citizens with information which they can reuse to deliver improved services, such as education, health and sanitation for all. Today, developing nations are faced with many challenges, for example rising levels of corruption, which have drained state resources in many of these countries. New levels of openness present opportunities to drastically reduce corruption, improve governance and drive economic growth.
Africa has over the years invested billions of dollars in telecommunications infrastructure to increase mobile network coverage and bandwidth. Enhanced telecommunications infrastructure has amplified access to mobile phones and the Internet, which has revolutionised how people interact, engage government, do business and access services. The digital divide, however, remains a challenge for digital government (e-governance), due to lack of competence and of the ability to access and use ICT, caused by social and economic disparities. Despite these challenges, open data presents opportunities for citizens to become intermediaries of e-governance, providing solutions that keep citizens informed and hold government to account, thus increasing transparency and enhancing the delivery of public services.