Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Ubuntu Insights: Using the ubuntu-app-platform content interface in app snaps

Thu, 01/26/2017 - 08:44

This is a guest post by Olivier Tilloy, Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com.

Recently the ubuntu-app-platform snap has been made available in the store for application developers to build their snaps without bundling all their dependencies. The ubuntu-app-platform snap includes the standard Qt libraries (version 5.6.1 as of this writing) and QML runtime, the Ubuntu UI Toolkit and related dependencies, and Oxide (a web engine based on the Chromium content API) with its QML bindings.

This allows app developers to declare a dependency on this snap through the content sharing mechanism, thus reducing dramatically the size of the resulting app snaps.

I went through the exercise with the webbrowser-app snap. This proved surprisingly easy and the size of the snap (amd64 architecture) went down from 136MB to 22MB, a sizeable saving!

For those interested in the details, here are the actual changes in the snapcraft.yaml file: https://bazaar.launchpad.net/~phablet-team/webbrowser-app/staging/revision/1576

Essentially, they consist of:

  • Using the ‘platform’ plug (content interface) and specifying its default provider (‘ubuntu-app-platform’)
  • Removing pretty much all stage packages
  • Adding an implicit dependency on the ’desktop-ubuntu-app-platform’ wiki part
  • Adding an empty ‘ubuntu-app-platform’ directory in the snap where snapd will bind-mount the content shared by the ubuntu-app-platform snap
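For illustration, the consuming snap’s side of this setup in snapcraft.yaml looks roughly like the following sketch. The key names and the content identifier here are my own approximation; the linked webbrowser-app revision is the authoritative reference.

```yaml
# Hypothetical sketch of the plug declaration; verify the exact keys
# and values against the linked webbrowser-app revision.
plugs:
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform
```

The target directory corresponds to the empty ubuntu-app-platform directory mentioned above, where snapd bind-mounts the content shared by the ubuntu-app-platform snap.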

Note that the resulting snap could be made even smaller. There is a known bug in snapcraft where it uses ldd to crawl the dependencies, ignoring the fact that those dependencies are already present in the ubuntu-app-platform snap.

Also note that if your app depends on any Qt module that isn’t bundled with ubuntu-app-platform, you will need to add it to the stage packages of your snap, and this is likely to bring in all the Qt dependencies, thus duplicating them. The easy fix for this situation is to override snapcraft’s default behaviour by specifying which files the part should install, using the “snap” section (see what was done for e.g. address-book-app at https://code.launchpad.net/~renatofilho/address-book-app/ubuntu-app-platform/+merge/311351).

Harald Sitter: KDE Slimbook

Thu, 01/26/2017 - 05:32

The past couple of months an elite team of KDE contributors worked on a top-secret project. Today we finally announced it to the public.

The KDE Slimbook

Together with the Spanish laptop retailer Slimbook, we created our very first KDE laptop. It looks super slick and sports an ever so sexy KDE Slimbook logo on the back of the screen. It will initially ship with KDE neon as its operating system.

Naturally, as one of the neon developers, I was doing some software work to help this along. Last year we already switched to a more reliable graphics driver. Our installer got a face-lift to make it more visually appealing, and it gained an actually working OEM installation mode. A hardware integration feature was added to our package pool to make sure the KDE Slimbook works perfectly out of the box.
The device looks and feels awesome. Plasma’s stellar look and feel complements it very well, making for a perfect overall experience.

I am super excited and can’t wait for more people to get their hands on it, so we get closer to a world in which everyone has control over their digital life and enjoys freedom and privacy, thanks to KDE.

Stephan Adig: Dear OpenStack Foundation

Thu, 01/26/2017 - 02:39

why do I need to be an OpenStack Foundation Member when I want to send you a bugfix via PR on GitHub?

I don't wanna work on OpenStack per se, I just want to use one of your little utils from your stack and it doesn't work as expected under a newer version of Python :)

It would be nice if the barrier to contribute could be lowered.

Meerkat: The Perfect C Array Library

Thu, 01/26/2017 - 01:38

I love C. And I loathe C++.

But there’s one thing I like about C++: The fact that I don’t have to write my own dynamic array libraries each time I try to start a project.

Of course, there are many libraries that exist for working with arrays in C. Glib, Eina, DynArray, etc. But I wanted something as easy to use as C++’s std::vector, with the performance and memory usage of std::vector.

By the way, I am not talking about algorithmic performance. I’m writing this assuming the algorithms are identical (i.e. I’m writing purely about implementation differences).

There are a few problems with the performance and memory usage of the aforementioned libraries, the major one being that the element size is stored as a structure member. This means an extra 4-8 bytes per array, and constantly having to read a variable (which costs many missed optimization opportunities). While this may not sound too bad (and in the grand scheme of things, probably isn’t), it is undeniably less efficient than C++.

This isn’t the only problem; there are other missed optimization opportunities in the function-based (as opposed to macro-based) variants: calling functions for tiny operations, calling memcpy for types that fit within registers, and so on.

All of this might seem like splitting hairs, and it probably is. But knowing that C++ can be faster, more memory efficient, and less bothersome to code in than C is not a thought I like very much. So I wanted to try to level the playing field.

It took a rather long stretch of sporadic work to create my very own “Perfect C Array Library” that, I thought, fulfilled my requirements.

First, let’s look at some example code using it:

    array(int) myarray = array_init();
    array_push(myarray, 5);
    array_push(myarray, 6);
    for (int i = 0; i < myarray.length; i++) {
        printf("%i\n", myarray.data[i]);
    }
    array_free(myarray);

Alright, it might be a tiny bit less pretty than C++. But hey, this is good enough for me.

In terms of performance and memory issues, I fixed the issues I wrote above. So in theory, it should be just as fast as C++, right?

Turns out I missed one issue: cache misses. In my mind, if everything was written as a macro, it would, in theory, be faster than functions. I was wrong. Inlining large portions of code can result in cache misses, which quite negatively impact performance.

So, as far as I can see, it is impossible to write a set of array functions for C that will be as fast and easy to use as C++’s std::vector. But please correct me if I’m wrong!

With that being said, this implementation is the most efficient I’ve been able to write so far, so let me show you the idea behind it:

    #define array(type) \
        struct { \
            type* data; \
            size_t length; \
        }

    #define array_init() \
        { \
            .data = NULL, \
            .length = 0, \
        }

    #define array_free(array) \
        do { \
            free(array.data); \
            array.data = NULL; \
            array.length = 0; \
        } while (0)

    #define array_push(array, element) \
        do { \
            array.data = realloc(array.data, \
                sizeof(*array.data) * (array.length + 1)); \
            array.data[array.length] = element; \
            array.length++; \
        } while (0)

The magic is in sizeof(*array.data). For some reason I never knew this was legal in C, but it does exactly what it says: it returns the size of the element type, which eliminates the need to store that size in the struct.

The code above is oversimplified to demonstrate the idea. It’s very incomplete, algorithmically slow, and unsafe. But the idea is there.

To summarize, I am not aware of any way to write a completely zero-compromise array library in C, but the code above shows the closest I’ve come to that.

 

P.S. There is one problem I am aware of with this method:

    array(int) myarray;
    array(int) myarray1 = myarray; /* 'error: invalid initializer' */

There are 2 ways to get around this:

    memcpy(&myarray1, &myarray, sizeof(myarray));
    /* or */
    myarray1 = *((typeof(myarray1)*)&myarray); /* requires GNU C */

Both of which should, under a decent optimization level, result in the same assembly.


Timo Aaltonen: Mesa 12.0.6 in 16.04 & 16.10

Wed, 01/25/2017 - 22:13

Previous LTS point releases came with a renamed Mesa backported from the latest release (mesa-lts-wily, for instance). Among other issues, this prevented providing newer Mesa backports to point-release users without creating a mess of different versions.

That’s why, from 16.04.2 onwards, Mesa will be backported unrenamed, and this time it is the last version of the 12.0.x series, which was also used in 16.10. It’s available now in xenial-proposed, and of course in yakkety-proposed too (16.10 released with 12.0.3). Get it while it’s hot!


Stuart Langridge: We all sorta thought

Wed, 01/25/2017 - 17:08

A thing I wrote today, about Trump and Brexit and “post-truth” and “alternative facts” and helplessness, because I’ve had this conversation separately three times today.

this is the thing. We all sorta thought (and by “we” I mean everyone from us here right back to, I dunno, Newton and Boyle) that if we provided inductive or deductive proof of a thing, that everyone else would say “oh yeah, I’m convinced now!” and that’d be it. But people who don’t want that to happen have learned that attacking the evidence doesn’t work — it took them a few hundred years to learn that, but they did — but dismissing the whole idea as illegitimate does work. And we don’t know how to argue against that. I say two and two are four; you disagree; I say “no look here’s the proof”; you say “your methods of proof are wrong and biased”; and then I’m all, er, I don’t know what to say now, you were meant to be convinced by the proof.

more importantly: a third party, looking at that conversation, goes away thinking “well, is 2+2 equal to 4? Don’t know; there seem to be two sides to that argument”, or worse, “man, I just don’t care what 2+2 is because every time I try to find out there’s just loads of shouting, so I’ll stop asking”.

and thus, modern politics. Gaslighting and obfuscation, designed to make people believe that facts are disputable and that engagement is confusing and annoying.

(Of course, part of the problem here is that our side have a habit of declaring things to be an actual fact when they’re really “what we want to believe”, and once one’s cried wolf that way a few times, one’s credibility is gone and it’s really hard to get back. It’s not all the other side’s fault.)

Normally I wouldn’t re-post such a thing, but of course this conversation happened on Slack, which means that six months from now I won’t be able to link to this because it’ll be over 10,000 messages ago and Slack will be holding it to ransom until we pay money, and five years from now I won’t be able to link to it because Slack will have gone bust or have been sold to someone and shut down.

Xubuntu: Winners of the #lovexubuntu Competition!

Wed, 01/25/2017 - 13:53

As Xubuntu’s tenth anniversary year is now over, it’s time to announce the winners of the #lovexubuntu competition announced in June!

The two grand prize winners, receiving a t-shirt and a sticker set, are Keith I Myers with his Xubuntu cookie cutters and Daniel Eriksson with a story of a happy customer. The three other finalists, each receiving a set of Xubuntu stickers, are Dina, Sabrin Islam and Michael Morozov.

Congratulations to all winners!

Finally, before presenting the winning submissions, let us thank everybody who submitted a story or a picture – we really appreciate it! For those who want to see more, all of the submissions are listed on the Xubuntu wiki on the Love Xubuntu 2016 page.

The Grand Prize Winners

Keith I Myers

Xubuntu cookie cutters by Keith I Myers

After seeing a simple metal cookie cutter created by the Xubuntu Marketing lead, Keith was inspired to make a plastic 3D-printed version of the Xubuntu cookie cutter. He printed several of them and also shared the design on Thingiverse so others could also print it.

If you decide to print and use these, we’d love to see the resulting cookies!

Daniel Eriksson

We run a small business, mainly doing computer service and maintenance, app programming and other similar things. One of the things we do is customized Linux desktops, where we build a user interface based around a customer’s wishes, tweaking everything from themes, colors and fonts to panels, widgets and other content. When we started doing this we tried out and evaluated loads of distributions and desktop environments, eventually deciding that Xubuntu was the perfect choice. We wanted to maximize the amount of customization we could do while still having a system that was light on resources (since customers often have old computers).

It was a choice we have never regretted, as it has always fit our needs perfectly. We can get everything from design to workflow just as we want it, and it is stable as a rock while still often introducing new features for us to play with.

One of our best experiences was with a person who wanted an interface on a laptop that was just as simple and scaled down as that of an iPad, while still being able to do all the things a computer ought to do. This was not an especially computer-savvy person, so it needed to be straightforward and simple. We managed to discard most classic desktop parameters and build a very unique interface, all within what was provided by stock Xubuntu (though we did some art ourselves). It turned out great, our customer was very happy with it, and other people have shown interest in having something similar on their computers. Needless to say, this was a success story for us, one which would not have been possible without Xubuntu.

So thanks for all your hard work! We keep on designing our users’ desktops and will continue to use the excellent Xubuntu for it. :)

Finalists

Dina

I live in Israel, and in Hebrew, the slang word “Zubi” is an insolent and extreme way to say “No way I’ll do it”.

Also, according to the Hebrew Wikipedia, Xubuntu is pronounced as “Zoo-boon-too” rather than “Ksoo-boon-too” (its name is written in Hebrew, which solves that ambiguity).

Therefore, when I told a friend that my old computer would not boot because of a hard disk problem, and all the technicians advised me to buy a new one, but I installed Xubuntu and it works, he noted that “Xubuntu” actually sounds like “I’m not doing that, I’m moving to Linux!”

Sabrin Islam

@Xubuntu A teacher once asked me, “how did you get Windows to look like that”, to which I replied it’s Xubuntu sir #LoveXubuntu

– @Ornim on Twitter
Original tweet

Michael Morozov

I #LoveXubuntu because it’s top-notch, minimalistic neat and helps me focus on real things.

– @m1xo_0n on Twitter
Original tweet

Beyond Year 10

As we look forward to 2017 and the 11th year of Xubuntu, keep an eye out for other ways you can help celebrate and promote Xubuntu. And as always, we could use more folks contributing directly to the development, testing and release of Xubuntu; see the Xubuntu Contributor Documentation to learn more.

Ubuntu Insights: Canonical Distribution of Kubernetes – Release 1.5.2

Tue, 01/24/2017 - 08:28

We’re proud to announce support for Kubernetes 1.5.2 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premises (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.2 is a patch release consisting mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.2 cluster up and running on an Ubuntu 16.04 system:

    sudo apt-add-repository ppa:juju/stable
    sudo apt-add-repository ppa:conjure-up/next
    sudo apt update
    sudo apt install conjure-up
    conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production grade deployments and cluster lifecycle management it is recommended to read the full Canonical Distribution of Kubernetes documentation.

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your kubernetes model selected, you can upgrade your cluster by deploying the bundle, provided it is on the 1.5.x series of Kubernetes. At this time, releases before 1.5.x have not been tested. Depending on which bundle you previously deployed, run:

juju deploy canonical-kubernetes

or

juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes as a different label, you will need to manually upgrade the components. The following command list assumes you have made no tweaks, but can be modified to work for your deployment.

    juju upgrade-charm kubernetes-master
    juju upgrade-charm kubernetes-worker
    juju upgrade-charm etcd
    juju upgrade-charm flannel
    juju upgrade-charm easyrsa
    juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and resources to the Kubernetes 1.5.2 release of the Canonical Distribution of Kubernetes.

New features:
  • Full support for Kubernetes v1.5.2.
General Fixes
  • #151 #187 It wasn’t very transparent to users that they should be using conjure-up for local development; conjure-up is now the de facto mechanism for deploying the CDK.

  • #173 Resolved permissions on ~/.kube on kubernetes-worker units

  • #169 Tuned the verbosity of the AddonTacticManager class during charm layer build process

  • #162 Added NO_PROXY configuration to prevent routing all requests through configured proxy [by @axinojolais]

  • #160 Resolved an error in flannel sometimes encountered during cni-relation-changed [by @spikebike]

  • #172 Resolved sporadic timeout issues between worker and apiserver due to nginx connection buffering [by @axinojolais]

  • #101 Work-around for offline installs attempting to contact pypi to install docker-compose

  • #95 Tuned verbosity of copy operations in the debug script for debugging the debug script.

Etcd layer-specific changes
  • #72 #70 Resolved a certificate-relation error where etcdctl would attempt to contact the cluster master before services were ready [by @javacruft]
Unfiled/un-scheduled fixes:
  • #190 Removal of assembled bundles from the repository. See bundle author/contributors notice below
Additional Feature(s):
  • We’ve open-sourced the release management scripts we use in a Juju-deployed Jenkins model. These scripts contain the logic we’ve been running by hand, and give users a clear view into how we build, package, test, and release the CDK. You can see these scripts in the juju-solutions/kubernetes-jenkins repository. This is early work, and will continue to be iterated on and documented as we push towards the Kubernetes 1.6 release.
Notice to bundle authors and contributors:

The fix for #190 is a larger change that has landed in the bundle-canonical-kubernetes repository. Instead of maintaining several copies of a single use-case bundle across several repositories, we are now assembling the CDK-based bundles from fragments (unofficial nomenclature).

This affords us the freedom to rapidly iterate on a CDK-based bundle and include partner technologies, such as different SDN vendors, storage backend components, and other integration points. This keeps our CDK bundle succinct while allowing more complex solutions to be assembled easily, reliably, and repeatably. It does change the contribution guidelines for end users.

Any changes to the core bundle should be placed in its respective fragment under the fragments directory. Once this has been placed and merged, the primary published bundles can be assembled by running ./bundle in the root of the repository. This process is outlined in the repository’s README.md.

We look forward to any feedback on how opaque or transparent this process is, and whether it has any useful applications outside of our own release management process. The ./bundle Python script is still very much geared towards our own release process and how we assemble bundles targeted for the CDK. However, we’re open to generalizing it and encourage feedback and contributions to make it more useful to more people.

How to contact us:

We’re normally found in these Slack channels and attend these sig meetings regularly:

Operators are an important part of Kubernetes; we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels; feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

Jono Bacon: Endless Code and Mission Hardware Demo

Tue, 01/24/2017 - 05:35

Recently, I have had the pleasure of working with a fantastic company called Endless who are building a range of computers and a Linux-based operating system called Endless OS.

My work with them has primarily involved community and product development for an initiative in which they are integrating functionality into the operating system that teaches you how to code. This provides a powerful platform where you can learn to code and easily hack on applications in the platform.

If this sounds interesting to you, I created a short video demo where I show off their Mission hardware as well as run through a demo of Endless Code in action. You can see it below:

I would love to hear what you think and how Endless Code can be improved in the comments below.

The post Endless Code and Mission Hardware Demo appeared first on Jono Bacon.

The Fridge: Ubuntu Weekly Newsletter Issue 495

Mon, 01/23/2017 - 19:24

Welcome to the Ubuntu Weekly Newsletter. This is issue #495 for the weeks January 9 – 22, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA).

Ubuntu Insights: What IT Pros Need to Know about Server Provisioning

Mon, 01/23/2017 - 06:30

Big Software, IoT and Big Data are changing how organisations are architecting, deploying, and managing their infrastructure. Traditional models are being challenged and replaced by software solutions that are deployed across many environments and many servers. However, no matter what infrastructure you have, there are bare metal servers under it, somewhere.

Organisations are looking for more efficient ways to balance their hardware and infrastructure investments with the efficiencies of the cloud. Canonical’s MAAS (Metal As A Service) is such a technology. MAAS is designed for devops at scale, in places where bare metal is the best way to run your applications. Big data, private cloud, PAAS and HPC all thrive on MAAS. Hardware has always been an expensive and difficult resource to deploy within a data centre, but is unfortunately still a major consideration for any organisation moving all or part of their infrastructure to the cloud. To become more cost-effective, many organisations hire teams of developers to cobble together software solutions that solve functional business challenges while leveraging existing legacy hardware in the hopes of offsetting the need to buy and deploy more hardware-based solutions.

MAAS isn’t a new concept, but demand and adoption rates are growing because many enterprises want to combine the flexibility of cloud services with the raw power of bare metal servers to run high-power, scalable workloads. For example, when a new server needs to be deployed, MAAS automates most, if not all, of the provisioning process. Automation makes deploying solutions much quicker and more efficient because it allows tedious tasks to be performed faster and more accurately without human intervention. Even with proper and thorough documentation, manually deploying a server to run web services or Hadoop, for example, could take hours, compared to a few minutes with MAAS.

Forward thinking companies are leveraging server provisioning to combine the flexibility of the cloud with the power and security of hardware. For example:

  • High Performance Computing organisations are using MAAS to modernise how they deploy and allocate servers quickly and efficiently.
  • Smart data centres are using MAAS to multi-purpose their server usage, improving efficiency and ensuring servers do not go underutilised.
  • Hybrid cloud providers leverage MAAS to provide extra server support during peak demand times and between various public cloud providers.

This ebook: Server Provisioning: What Network Admins & IT Pros Need to Know outlines how innovative companies are leveraging MAAS to get more out of their hardware investment while making their cloud environments more efficient and reliable. Smart IT pros know that going to the cloud does not mean having to rip and replace their entire infrastructure to take advantage of the opportunities the cloud offers. Canonical’s MAAS is a mature solution to help organisations to take full advantage of their cloud and legacy hardware investments.

Get started with MAAS

To download and install MAAS for free, please visit ubuntu.com/download/server-provisioning, or contact us to talk to one of our scale-out experts about deploying MAAS in your datacenter. For more information, please download our free eBook on MAAS.

Download eBook

Jorge Castro: Canonical Distribution of Kubernetes - Release 1.5.2

Mon, 01/23/2017 - 01:30

We’re proud to announce support for Kubernetes 1.5.2 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premises (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.2 is a patch release consisting mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.2 cluster up and running on an Ubuntu 16.04 system:

    sudo apt-add-repository ppa:juju/stable
    sudo apt-add-repository ppa:conjure-up/next
    sudo apt update
    sudo apt install conjure-up
    conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production grade deployments and cluster lifecycle management it is recommended to read the full Canonical Distribution of Kubernetes documentation.

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your kubernetes model selected, you can upgrade your cluster by deploying the bundle, provided it is on the 1.5.x series of Kubernetes. At this time, releases before 1.5.x have not been tested. Depending on which bundle you previously deployed, run:

    juju deploy canonical-kubernetes

or

    juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes as a different label, you will need to manually upgrade the components. The following command list assumes you have made no tweaks, but can be modified to work for your deployment.

    juju upgrade-charm kubernetes-master
    juju upgrade-charm kubernetes-worker
    juju upgrade-charm etcd
    juju upgrade-charm flannel
    juju upgrade-charm easyrsa
    juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and resources to the Kubernetes 1.5.2 release of the Canonical Distribution of Kubernetes.

New features:
  • Full support for Kubernetes v1.5.2.
General Fixes
  • #151 #187 It wasn’t very transparent to users that they should be using conjure-up for local development; conjure-up is now the de facto mechanism for deploying the CDK.

  • #173 Resolved permissions on ~/.kube on kubernetes-worker units

  • #169 Tuned the verbosity of the AddonTacticManager class during charm layer build process

  • #162 Added NO_PROXY configuration to prevent routing all requests through configured proxy [by @axinojolais]

  • #160 Resolved an error in flannel sometimes encountered during cni-relation-changed [by @spikebike]

  • #172 Resolved sporadic timeout issues between worker and apiserver due to nginx connection buffering [by @axinojolais]

  • #101 Work-around for offline installs attempting to contact pypi to install docker-compose

  • #95 Tuned verbosity of copy operations in the debug script for debugging the debug script.

Etcd layer-specific changes
  • #72 #70 Resolved a certificate-relation error where etcdctl would attempt to contact the cluster master before services were ready [by @javacruft]
Unfiled/un-scheduled fixes:
  • #190 Removal of assembled bundles from the repository. See bundle author/contributors notice below
Additional Feature(s):
  • We’ve open-sourced the release management scripts we use in a Juju-deployed Jenkins model. These scripts contain the logic we’ve been running by hand, and give users a clear view into how we build, package, test, and release the CDK. You can see these scripts in the juju-solutions/kubernetes-jenkins repository. This is early work, and will continue to be iterated on and documented as we push towards the Kubernetes 1.6 release.
Notice to bundle authors and contributors:

The fix for #190 is a larger change that has landed in the bundle-canonical-kubernetes repository. Instead of maintaining several copies of a single use-case bundle across several repositories, we are now assembling the CDK-based bundles from fragments (unofficial nomenclature).

This affords us the freedom to rapidly iterate on a CDK-based bundle and include partner technologies, such as different SDN vendors, storage backend components, and other integration points. This keeps our CDK bundle succinct while allowing more complex solutions to be assembled easily, reliably, and repeatably. It does change the contribution guidelines for end users.

Any changes to the core bundle should be placed in its respective fragment under the fragments directory. Once this has been placed and merged, the primary published bundles can be assembled by running ./bundle in the root of the repository. This process is outlined in the repository’s README.md.

We look forward to any feedback on how opaque or transparent this process is, and whether it has any useful applications outside of our own release management process. The ./bundle Python script is still very much geared towards our own release process and how we assemble bundles targeted for the CDK. However, we’re open to generalizing it and encourage feedback and contributions to make it more useful to more people.

How to contact us:

We’re normally found in these Slack channels and attend these sig meetings regularly:

Operators are an important part of Kubernetes; we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels; feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

Jonathan Riddell: Reports of KDE neon Downloads Being Dangerous Entirely Exaggerated

Fri, 01/20/2017 - 17:18

When you download a KDE neon ISO you get transparently redirected to one of the mirrors that KDE uses. Recently the Polish mirror was marked as unsafe in Google Safe Browsing, an extremely popular service used by most web browsers and anti-virus software to check whether a site is problematic. I expect there was a problem elsewhere on this mirror, but it certainly wasn’t KDE neon. KDE sysadmins have tried to contact the mirror and Google.

You can verify any KDE neon installable image by checking the GPG signature against the KDE neon ISO Signing Key. This is the .sig file alongside each of the .iso files.

gpg2 --recv-key '348C 8651 2066 33FD 983A 8FC4 DEAC EA00 075E 1D76'
wget http://files.kde.org/neon/images/neon-useredition/current/neon-useredition-current.iso.sig
gpg2 --verify neon-useredition-current.iso.sig
gpg: Signature made Thu 19 Jan 2017 11:18:13 GMT using RSA key ID 075E1D76
gpg: Good signature from "KDE neon ISO Signing Key <neon@kde.org>" [full]

Adding a sensible GUI to do this is future work and fairly tricky to do in a secure way but hopefully soon.


Jonathan Riddell: KDE neon Inaugurated with Calamares Installer

Fri, 01/20/2017 - 11:23

You voted for change and today we’re bringing change. Today we give back the installer to the people. Today Calamares 3 was released.

It’s been a long-standing wish of KDE neon to switch to the Calamares installer. Calamares is a distro-independent installer used by various projects such as Netrunner and Tanglu. It’s written in Qt and KDE Frameworks and has modules in C++ or Python.

Today I’ve switched the Developer Unstable edition to Calamares and it looks to work pretty nicely.

However, a few features are missing compared to the previous Ubiquity installer. OEM mode might be in there but needs me to add some integration for it. Restricted codecs install should be easy to add. LUKS-encrypted hard disks are there but also need some integration from me. Encrypted home folders aren’t there and should be added. Updating to the latest packages on install should also be added. It does seem to work with UEFI computers, but not with Secure Boot yet. Let me know if you spot any others.

I’ve only tested this on a simple virtual machine, so give it a try and see what breaks. Or if you want to switch back, run apt install ubiquity-frontend-kde ubiquity-slideshow-neon.

Ubuntu Insights: tutorials.ubuntu.com goes live!

Fri, 01/20/2017 - 06:49

We are really proud to announce that tutorials.ubuntu.com went live this week!

What are Ubuntu tutorials?
Ubuntu tutorials are topic-specific walkthroughs that give you hands-on experience in a particular domain. They are just like learning from pair programming, except you can do it on your own! They provide a step-by-step process for development and devops activities on Ubuntu machines, servers or devices.

Each tutorial has:

  • A clear and detailed summary of what you will learn
  • The content difficulty level, so you know where to start from
  • An estimated completion time for each step and for the whole tutorial, so that you can plan precisely depending on your availability
  • A “where to go from here” final step, guiding you to the next logical places for more information about that particular subject, or to the next tutorial you can follow now that you have learned those notions

For now, the tutorials focus mainly on building and using snaps and Ubuntu Core. If you’d like to see tutorials cover more topics, or if you’re interested in contributing tutorials, let us know.

A snap for all tutorials!
And that’s not all! You can also work offline and always take your tutorials with you. Using snap technology, we built a tutorials snap including the same content and the same technology as the website (that’s the beauty of snaps!).

To get access to it, on any snap system like Ubuntu desktop 16.04 LTS, just type:

$ snap install snap-codelabs

Open your browser at http://localhost:8123/ and enjoy!

Note that its name and design will soon change to align more with tutorials.ubuntu.com.

You can contribute too!

If you plan to help us by contributing a new Ubuntu tutorial, it’s pretty simple! The backend is based on a simple Google doc with a straightforward syntax. If you’d like to write your own tutorial, here are some guidelines you can follow that will help you with tone of voice, content and much more. Let us know when you’re done!

You will note that we based our content on the Google Codelabs framework, which they have open sourced. A big shout-out to them!

We hope you’ll like playing and learning those new concepts in a fun and interactive way! See you soon during your next tutorial.

Harald Sitter: Snapping DBus

Fri, 01/20/2017 - 06:47

For the past couple of months I’ve been working on getting KDE applications into the binary bundle format snap.

With the release of snapd 2.20 last month it gained a much-needed feature to enable easy bundling of applications that register a DBus service name. The all new dbus interface makes this super easy.

Being able to easily register a DBus service matters a great deal because an extraordinary amount of KDE’s applications are doing just that. The use cases range from actual inter-process communication to spin-offs from this functionality, such as single-instance behavior and clean application termination via the kquitapp command-line utility.

There’s barely any application that gets by without also claiming its own space on the session bus, so it is a good thing that enabling this is now super easy when building snap bundles.

One simply adds a suitable slot to the snapcraft.yaml and that’s it:

slots:
  session-dbus-interface:
    interface: dbus
    name: org.kde.kmplot
    bus: session

An obvious caveat is that the application needs to claim a well-known name on the bus. For most of KDE’s applications this will happen automatically as the KDBusAddons framework will claim the correct name assuming the QCoreApplication properties were set with the relevant data to deduce the organization+app reverse-domain-name.

As an additional bonus, in KDE we tend to codify the used service name in the desktop files via the X-DBUS-ServiceName entry already. When writing a snapcraft.yaml it is easy to figure out if DBus should be used and what the service name is by simply checking the desktop file.
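Pulling the service name out of a desktop file is a one-liner. A minimal sketch, using a stand-in desktop file written to /tmp (on a real system you would grep the installed file, e.g. under /usr/share/applications):

```shell
# Stand-in desktop file for illustration; real ones live under
# /usr/share/applications (e.g. org.kde.kmplot.desktop)
cat > /tmp/kmplot.desktop <<'EOF'
[Desktop Entry]
Name=KmPlot
Exec=kmplot
X-DBUS-ServiceName=org.kde.kmplot
EOF
# Extract the value of the X-DBUS-ServiceName key
grep '^X-DBUS-ServiceName=' /tmp/kmplot.desktop | cut -d= -f2
```

The extracted value is exactly what goes into the `name:` field of the dbus slot in snapcraft.yaml.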

The introduction of this feature moves a really big roadblock out of the way for enabling KDE’s applications to be easily snapped and published.

Daniel Pocock: Which movie most accurately forecasts the Trump presidency?

Thu, 01/19/2017 - 12:31

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide thought-provoking insight into what could eventuate. What's more, two of them bear a creepy resemblance to the Trump phenomenon and to many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, has a history that is eerily reminiscent of Trump: he is born into a wealthy family; a series of disasters befalls every honest person he comes into contact with; he comes to control a vast business empire acquired by inheritance; and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice that Damien Thorn and Donald Trump even share the same initials, DT?

Nathan Haines: UbuCon Summit at SCALE 15x Call for Papers

Thu, 01/19/2017 - 03:12

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. UbuCon Summit at SCALE 15x is the next in the impressive series of conferences.

UbuCon Summit at SCALE 15x takes place in Pasadena, California on March 2nd and 3rd during the first two days of SCALE 15x. Ubuntu will also have a booth at SCALE's expo floor from March 3rd through 5th.

We are putting together the conference schedule and are announcing a call for papers. While we have some amazing speakers and an always-vibrant unconference schedule planned, it is the community, as always, who make UbuCon what it is—just as the community sets Ubuntu apart.

Interested speakers who have Ubuntu-related topics can submit their talk to the SCALE call for papers site. UbuCon Summit has a wide range of both developers and enthusiasts, so any interesting topic is welcome, no matter how casual or technical. The SCALE CFP form is available here:

http://www.socallinuxexpo.org/scale/15x/cfp

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all relevant information.

http://www.ubucon.org/

About SCaLE:

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

Stéphane Graber: LXD on Debian (using snapd)

Wed, 01/18/2017 - 15:19

Introduction

So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in the following Linux distributions (let me know if I missed one):

We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or, in the case of CentOS, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today: use the same LXD snap package that I mentioned in a previous post, but on Debian!

Requirements
  • A Debian “testing” (stretch) system
  • The stock Debian kernel without apparmor support
  • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system
Installing snapd and LXD

Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you’ve never used snapd before, you’ll have to either log out and log back in to update your PATH, or just update your existing one with:

. /etc/profile.d/apps-bin-path.sh

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian
root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu
root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos
root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux
root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

  • All containers are shut down and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that an “lxd” group exists on your system, add whoever you want to manage LXD to that group, then restart the LXD daemon.
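A minimal sketch of the membership check involved; the helper below is plain string logic (so it works without root), while the suggested group-management commands it prints are the standard ones described above:

```shell
# Check whether a space-separated group list contains a given group.
in_group() {  # usage: in_group "<space-separated groups>" <group>
  echo "$1" | tr ' ' '\n' | grep -qx "$2"
}

# Inspect the current user's groups and suggest the commands to run if needed.
if in_group "$(id -nG)" lxd; then
  echo "already a member of the lxd group"
else
  echo "to join: sudo groupadd --system lxd && sudo usermod -aG lxd \$USER"
fi
```

Remember that group membership only takes effect on a new login session.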

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Simos Xenitellis: How to completely remove a third-party repository from Ubuntu

Tue, 01/17/2017 - 15:20

Suppose you added a third-party repository of DEB packages in your Ubuntu and you now want to completely remove it, by either downgrading the packages to the official version in Ubuntu or removing them altogether. How do you do that?

Well, if it was a Personal Package Archive (PPA), you would simply use ppa-purge. ppa-purge is not pre-installed in Ubuntu, so we install it with

sudo apt update
sudo apt install ppa-purge

Here is the help for ppa-purge:

$ ppa-purge
Warning: Required ppa-name argument was not specified
Usage: sudo ppa-purge [options] <ppa:ppaowner>[/ppaname]

ppa-purge will reset all packages from a PPA to the standard versions released for your distribution.

Options:
    -p [ppaname]         PPA name to be disabled (default: ppa)
    -o [ppaowner]        PPA owner
    -s [host]            Repository server (default: ppa.launchpad.net)
    -d [distribution]    Override the default distribution choice.
    -y                   Pass -y --force-yes to apt-get or -y to aptitude
    -i                   Reverse preference of apt-get upon aptitude.
    -h                   Display this help text

Example usage commands:
    sudo ppa-purge -o xorg-edgers
        will remove https://launchpad.net/~xorg-edgers/+archive/ppa
    sudo ppa-purge -o sarvatt -p xorg-testing
        will remove https://launchpad.net/~sarvatt/+archive/xorg-testing
    sudo ppa-purge [ppa:]ubuntu-x-swat/x-updates
        will remove https://launchpad.net/~ubuntu-x-swat/+archive/x-updates

Notice: If ppa-purge fails for some reason and you wish to try again (for example: you left synaptic open while attempting to run it), simply uncomment the PPA from your sources, run apt-get update and try again.

Here is an example of ppa-purge that removes a PPA:
Suppose we want to completely uninstall the Official Wine Builds PPA. The URI of the PPA is shown on that page in bold, and it is ppa:wine/wine-builds.
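The ppa: URI encodes exactly the two pieces ppa-purge needs, the PPA owner and the archive name. A small sketch of how it splits apart using shell parameter expansion:

```shell
# Split a ppa: URI into its owner and archive-name components
ppa="ppa:wine/wine-builds"
owner=${ppa#ppa:}       # strip the "ppa:" prefix -> wine/wine-builds
owner=${owner%%/*}      # keep everything before the first "/" -> wine
name=${ppa#*/}          # keep everything after the first "/" -> wine-builds
echo "owner=$owner name=$name"
```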

To uninstall this PPA, we run

$ sudo ppa-purge ppa:wine/wine-builds
Updating packages lists
PPA to be removed: wine wine-builds
Package revert list generated:
wine-devel-
wine-devel-amd64-
wine-devel-i386:i386-
winehq-devel-
Disabling wine PPA from /etc/apt/sources.list.d/wine-ubuntu-wine-builds-xenial.list
Updating packages lists
...
PPA purged successfully
$ _

But how do we completely uninstall the packages of a third-party repository? Those do not have a URI that is similar to the format that ppa-purge needs!

Let’s see an example. If you have an Intel graphics card, you may choose to install their packaged drivers from 01.org. For Ubuntu 16.04, the download page is https://01.org/linuxgraphics/downloads/intel-graphics-update-tool-linux-os-v2.0.2  Yes, they provide a tool that you run on your system; it performs a set of checks and, once those pass, adds the Intel repository for the Intel graphics card drivers. You do not see a repository URI on this page; you need to dig deeper after installing to find it.

The details of the repository are in /etc/apt/sources.list.d/intellinuxgraphics.list and it is this single line

deb https://download.01.org/gfx/ubuntu/16.04/main xenial main #Intel Graphics drivers

How do we figure out the parameters for ppa-purge? These parameters are just used to identify the correct files in /var/lib/apt/lists/. In the case of the Intel drivers, the relevant files in /var/lib/apt/lists are:

/var/lib/apt/lists/download.01.org_gfx_ubuntu_16.04_main_dists_xenial_InRelease
/var/lib/apt/lists/download.01.org_gfx_ubuntu_16.04_main_dists_xenial_main_binary-amd64_Packages
/var/lib/apt/lists/download.01.org_gfx_ubuntu_16.04_main_dists_xenial_main_binary-i386_Packages

The important ones are the *_Packages files. The line of ppa-purge’s source code that will help us is:

PPA_LIST=/var/lib/apt/lists/${PPAHOST}_${PPAOWNER}_${PPANAME}_*_Packages

therefore, we select the parameters for ppa-purge accordingly:

-s download.01.org   for   ${PPAHOST}
-o gfx               for   ${PPAOWNER}
-p ubuntu            for   ${PPANAME}

Now ppa-purge can remove the packages from such a PPA as well, by using these parameters:

sudo ppa-purge -s download.01.org -o gfx -p ubuntu
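To see why these values work, substitute them into ppa-purge's PPA_LIST pattern; the resulting glob matches the Intel *_Packages files listed above:

```shell
# Substitute the chosen ppa-purge parameters into its PPA_LIST pattern
PPAHOST=download.01.org
PPAOWNER=gfx
PPANAME=ubuntu
# Quoting keeps the glob literal so we can inspect it; ppa-purge itself
# lets the shell expand it against /var/lib/apt/lists/
echo "/var/lib/apt/lists/${PPAHOST}_${PPAOWNER}_${PPANAME}_*_Packages"
```

The printed glob matches files such as download.01.org_gfx_ubuntu_16.04_main_dists_xenial_main_binary-amd64_Packages, which is why ppa-purge picks up the right repository.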

That’s it!
