Feed aggregator

Stuart Langridge: Niobium

Planet Ubuntu - Mon, 01/30/2017 - 16:20
[41 is] the smallest integer whose reciprocal has a 5-digit repetend. That is a consequence of the fact that 41 is a factor of 99999. — Wikipedia
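For those of us who had to look it up: a repetend is just the repeating block of a decimal expansion, and the 99999 connection is plain fraction arithmetic (my own working, not part of the quote):

\[
\frac{1}{41} = \frac{2439}{99999} = 0.\overline{02439},
\qquad\text{because}\qquad 41 \times 2439 = 99999,
\]

so the reciprocal of 41 repeats with the 5-digit block 02439.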

I don’t understand a lot of things, these days. I don’t understand what a 5-digit repetend is, or what 41 being a factor of 99999 has to do with anything. I don’t understand how much all this has changed in the last thirteen years of posts. I don’t understand when building web stuff got hard. I don’t understand why I can’t find anyone who sells wall lights that look nice without charging a hundred notes for each one, which is a bit steep when you need six. I don’t understand why I can’t get thinner and still eat as many sandwiches as I want. I don’t understand an awful lot of why the world suddenly became a terrible, frightening, mean-spirited, mocking, vitriolic place. And most of what I do understand about that, I hate.

We all sorta thought that we were moving forward; there was less hatred of the Other, fewer knives out, not as much fear and spite as there used to be. And it turns out that it wasn’t gone; it was just suppressed, building up and up underneath the volcano cap until the bad guys realised that there’s nothing actually stopping them doing terrible things and there’s nothing anyone can do about it. So the Tories moved from daring to talk about shutting down the NHS to actually doing it and nobody said anything. Or, more accurately, a bunch of people said things and it didn’t make any difference. Trump starts restricting immigration and targeting Muslims directly and puts a Nazi adviser on the National Security Council and nobody said anything. Or, more accurately, a bunch of people said things and it didn’t make any difference. I don’t want to give in to hatred — it leads to the Dark Side — and so I don’t want to hate them for doing this. But I do hate that I have to fight to avoid it. I hate that I feel so helpless. I hate that the only way I know to fight back is to actually fight — to become them. I hate that they turn everyone into malign, terrible copies of themselves. I hate that they don’t understand. I hate that I don’t understand. I hate that I just hate all the time now.

I’m forty-one. Apparently, according to Wikipedia, the US Navy Fleet Ballistic Missile nuclear submarines from the George Washington, Ethan Allen, Lafayette, James Madison, and Benjamin Franklin classes were nicknamed "41 for Freedom". 41 for freedom. Maybe that’s not a bad motto for me, being 41. Do more for freedom. My freedom, my family’s freedom, my friends’ freedom, my city’s freedom, people I’ve never met and never will’s freedom. None of us are free if one of us is chained, and if you don’t say it’s wrong then that says it right.

Two photos from today.

One is of Niamh, and her present to me for my birthday: a light box like the ones you get outside cinemas and churches and fast food places, on which we can put messages for one another. I’m hugely pleased with it. The other is of today’s anti-Trump demo in Victoria Square, at which Reverend David Butterworth, of the Methodist Church, said: “Whatever we can do to make this a more peaceful city and a more inclusive city, and to stand up and be counted, we must and should do it together. The only way that Donald Trump will win is if the good people of Birmingham, and of other cities that we’re twinned with like Chicago, stay silent.” People standing up, and a demonstration of what they’re standing up for. Not a bad way to start me being 41 for freedom, perhaps.

Happy birthday to me. And for those of you less lucky than me today: I hope we can help.

Ubuntu Insights: 48% of people unaware their IoT devices pose a security threat

Planet Ubuntu - Mon, 01/30/2017 - 08:49

This is a guest post by agency Wildfire. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

LONDON, U.K. – 30 January, 2017 – Nearly half (48%) of citizens remain unaware that their connected devices could be infiltrated and used to conduct a cyber attack. That’s according to a new IoT security whitepaper which was published today by Canonical – the makers of Ubuntu.

The report, which includes research from over 2,000 UK citizens, highlights the lack of impact that consumer awareness campaigns are having when it comes to internet security and the internet of things.

Despite the government’s latest online cyber awareness campaign costing over £6 per visit, 37% of consumers still believe that they are not ‘sufficiently aware’ of the risks that connected devices pose. What’s more, consumers seem largely ignorant of the escalating threat demonstrated by the high spike in IoT attacks in 2016. 79% say they have not read or seen any recent news stories regarding IoT security or privacy, and 78% claim that their distrust of IoT security has not increased in the last year.

The research also highlights the limited benefits of better education: Consumers are simply not that motivated to actively apply security updates, with the majority applying them only ‘occasionally’ or in some cases – not at all.

Commenting on these findings, Thibaut Rouffineau, Head of Devices Marketing at Canonical said: “These figures are troubling, and should be a wake-up call for the industry. Despite good intentions, government campaigns for cyber awareness and IoT security still have a long way to go. But then that’s the point: Ultimately the IoT industry needs to step up and take on responsibility. Government education of consumers and legislation will have a part to play, but overall the industry needs to take charge of keeping devices up to date and find a way to eliminate any potential vulnerabilities from devices before they can cause issues, rather than placing the burden on consumers.”

To download Ubuntu’s full ‘Taking charge of the IoT’s security vulnerabilities’ report, visit: http://ubunt.eu/ey14Y2

ENDS

Notes to Editor
‘Connected devices’ were defined to respondents as including e.g. Wi-Fi routers, webcams, smart thermostats or boilers, smart hoovers and other such smart home devices, but excluding computers and phones.

Survey methodology
The survey was conducted on Canonical’s behalf by research company Opinium in December 2016 using a panel of 2,000 UK adults.

About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu.

Established in 2004, Canonical is a privately held company.

For further information please visit https://www.ubuntu.com

Harald Sitter: KDE Applications in Ubuntu Snap Store

Planet Ubuntu - Mon, 01/30/2017 - 07:43

Following the recent addition of easy DBus service snapping in the snap binary bundle format, I am happy to say that we now have some of our KDE Applications in the Ubuntu 16.04 Snap Store.

To use them you need to first manually install the kde-frameworks-5 snap. Once you have it installed you can install the applications. Currently we have available:

  • ktuberling – The most awesome game ever!
  • kbruch – Learn how to do fractions (I almost failed at the first exercise :O)
  • katomic – Fun and education in one
  • kblocks – Tetris-like game
  • kmplot – Plotting mathematical functions
  • kgeography – An education application for learning states/countries/capitals
  • kollision – Casual ball game
  • kruler – A screen ruler to measure pixel distance on your screen

The Ubuntu 16.04 software center comes with Snap store support built in, so you can simply search for the application and should find a snap version for installation. As we are still working on stabilizing Snap support in Plasma’s Discover, for now, you have to resort to a terminal to test the snaps on KDE neon.

To get started using the command line interface of snap you can do the following:

sudo snap install kde-frameworks-5
sudo snap install kblocks

All currently available snaps are auto-generated. For some technical background, check out my earlier blog post on snapping KDE applications. In the near future I hope to get manually maintained snaps built automatically as well. From-git delivery to the edge channel is also still very much a desired feature. Stay tuned.

Ubuntu Insights: Installing a DIY Bare Metal GPU cluster for Kubernetes

Planet Ubuntu - Mon, 01/30/2017 - 05:14

I don’t know if you have ever seen one of the Orange Boxes from Canonical.

These are really sleek machines. They contain 10 Intel NUCs, plus an 11th one for the management. They are used as a demonstration tool for big software stacks such as OpenStack, Hadoop, and, of course, Kubernetes.

They are freely available from TranquilPC, so if you are an R&D team, or just interested in having a neat little cluster at home, I encourage you to have a look.

However, despite their immense qualities, they lack a critical piece of kit that Deep Learning geeks cherish: GPUs!!

In this blog/tutorial we will learn how to build, install and configure a DIY GPU cluster that uses a similar architecture. We start with hardware selection and experimentation, then dive into MAAS (Metal as a Service), a bare metal management system. Finally we look at Juju to deploy Kubernetes on Ubuntu, add CUDA support, and enable it in the cluster.

Hardware: Adding fully fledged GPUs to Intel NUCs?

When you look at them, it is hard to tell how to insert normal GPU cards into the tiny form factor of Intel NUCs. However, they have an M.2 NGFF port. This is essentially a PCI-e 4x port, just in a different form factor.

And there are adapters: one that converts M.2 into PCI-e 4x, and another that converts PCI-e 4x into 16x.

Sooo… Theoretically, we can connect GPUs to Intel NUCs. Let’s try it out!!

POC: First node

Let us start simple with a single-node Intel NUC and see if we can make a GPU work with it.

Already owning a NUC from the previous generation (NUC5i7SYH) and an old Radeon 7870, I just had to buy:

  • a PSU to power the GPU: for this, I found that the Corsair AX1500i was the best deal on the market, capable of powering up to 10 GPUs!! Perfect if I wanted to scale this with more nodes.
  • Adapters:
    M.2 to PCI-e 4x
    Riser 4x -> 16x
  • Hack to activate power on a PSU without having it connected to a “real” motherboard. Thanks to Michael Iatrou (@iatrou) for pointing me there.
  • Obviously a screen, keyboard, cables…

It’s aliiiiiiive! Ubuntu boot screen from the additional GPU running at 4x

At this point we have proof it is possible; it’s time to start building a real cluster.
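Before going further, a quick check I found useful (my own sketch, not from the original write-up) is to confirm the OS really sees the card over the M.2 link:

$ lspci | grep -i vga    # the discrete GPU should be listed alongside the NUC's integrated graphics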

Adding little brothers

Bill of materials

For each of the workers, you will need:

Then for the management nodes, 2x of the same NUCs as above, but without the GPU and with a smaller SSD.

And now the overall components:

  • PSU: Corsair AX1500i
  • Switch: Netgear GS108PE. You can use a lower-end switch; I just had this one available. I didn’t do anything funky on the network side.
  • Raspberry Pi: any version 2 or 3, with a 32GB micro SD card
  • Spacers
  • ATX PSU Switch

Execution

If it does not fit in the box, remove the box. So first, the motherboard of the NUC has to be extracted. Using a 3mm PVC sheet and the spacers, we can have a nice layout.

GPU View

On one side of the PVC, we attach the GPU so the power connector is visible at the bottom and the PCI-e port rises slightly over the edge. The holes are 2.8mm so that the M3 spacers go through, but you need to screw them in a little so they don’t move.

Intel NUC View

On the other side, we drill the fixation holes for the SSD and Intel NUC so that the PCI-e riser cable is aligned in front of the PCI-e port of the GPU. You’ll also have to drill the SSD metal support a little bit.

As you can see in the picture, we place the riser between the PVC and the NUC so it looks nicer.

We repeat the operation for each of the 4 nodes. Then, using our 50mm M3 hex standoffs, we attach them with 3 screws between each “blade”, hook up everything to the network and… Tadaaaaa!!

Close up view from the NUC side

From the GPU side

Software: Installation of the cluster

Giving life to the cluster will require quite a bit of work on the software side.

Instead of a manual process, we will leverage powerful management tooling. This will give us the ability to re-purpose our cluster in the future.

The tool to manage the metal itself is MAAS (Metal As A Service). It is developed by Canonical to manage bare metal server fleets, and already powers the Ubuntu Orange Box.

Then to deploy, we will be using Juju, Canonical’s modelling tool, which has bundles to deploy the Canonical Distribution of Kubernetes.

Bare Metal Provisioning: Installing MAAS

First of all we need to install MAAS on the Raspberry Pi 2.

For the rest of this guide, we will assume that you have the following ready:

  • A Raspberry Pi 2 or 3 installed with Ubuntu Server 16.04
  • The board’s ethernet port configured and connected to a network with Internet access
  • An additional USB-to-ethernet adapter, connected to our cluster switch

Network setup

The default Ubuntu image does not automatically configure the USB adapter. First, we query ifconfig to check for an eth1 (or similarly named) interface in addition to our eth0:

$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr b8:27:eb:4e:48:c6
          inet addr:192.168.1.138  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::ba27:ebff:fe4e:48c6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:150573 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39702 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:217430968 (217.4 MB)  TX bytes:3450423 (3.4 MB)

eth1      Link encap:Ethernet  HWaddr 00:0e:c6:c2:e6:82
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

We can now edit /etc/network/interfaces.d/eth1.cfg with:

# hwaddress 00:0e:c6:c2:e6:82
auto eth1
iface eth1 inet static
    address 192.168.23.1
    netmask 255.255.255.0

then start eth1 with


$ sudo ifup eth1

and now we have a secondary interface set up.

Base installation

Let’s install the requirements first:

$ sudo apt update && sudo apt upgrade -yqq
$ sudo apt install -yqq --no-install-recommends \
    maas \
    bzr \
    isc-dhcp-server \
    wakeonlan \
    amtterm \
    wsmancli \
    juju \
    zram-config

Let’s also use the occasion to fix the very annoying Perl locales bug that affects pretty much every Raspberry Pi around:

$ sudo locale-gen en_US.UTF-8

Now let’s activate zram to virtually increase our RAM by 1GB, by adding the lines below to /etc/rc.local:

modprobe zram && \
  echo $((1024*1024*1024)) | tee /sys/block/zram0/disksize && \
  mkswap /dev/zram0 && \
  swapon -p 10 /dev/zram0 && \
  exit 0

and do an immediate activation via

$ sudo modprobe zram && \
  echo $((1024*1024*1024)) | sudo tee /sys/block/zram0/disksize && \
  sudo mkswap /dev/zram0 && \
  sudo swapon -p 10 /dev/zram0
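As a quick sanity check (my addition, not from the original post), you can confirm the zram device is now serving as swap:

$ swapon -s    # /dev/zram0 should be listed with priority 10
$ free -m      # total swap should have grown by roughly 1GB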

DHCP Configuration

DHCP will be handled by MAAS directly, so we don’t need to manage it ourselves. However, the way it configures the default settings is pretty brutal, so you might want to tune it a little. Below is a /etc/dhcp/dhcpd.conf file that works and is a little fancier:

authoritative;
ddns-update-style none;
log-facility local7;

option subnet-mask 255.255.255.0;
option broadcast-address 192.168.23.255;
option routers 192.168.23.1;
option domain-name-servers 192.168.23.1;
option domain-name "maas";

default-lease-time 600;
max-lease-time 7200;

subnet 192.168.23.0 netmask 255.255.255.0 {
  range 192.168.23.10 192.168.23.49;
  host node00 {
    hardware ethernet B8:AE:ED:7A:B6:92;
    fixed-address 192.168.23.10;
  }
  ...
}

We also need to tell dhcpd to only serve requests on eth1, to prevent flooding our other networks. We do that by editing the INTERFACES option in /etc/default/isc-dhcp-server so it looks like:

# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
# Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES="eth1"

and finally we restart DHCP with

$ sudo systemctl restart isc-dhcp-server.service
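A hedged sanity check (not in the original post) that the server restarted cleanly and is serving on eth1:

$ systemctl status isc-dhcp-server --no-pager    # should report active (running)
$ sudo journalctl -u isc-dhcp-server | tail      # DHCPDISCOVER/DHCPOFFER lines should appear once nodes boot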

Simple Router Configuration

In our setup, the Raspberry Pi is the central point of the network. While MAAS provides DNS and DHCP by default, it does not act as a gateway, so our nodes may very well end up cut off from the Internet, which we obviously do not want.

So first we activate IP forwarding in sysctl:

sudo touch /etc/sysctl.d/99-maas.conf
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-maas.conf
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

Then we need to link our eth0 and eth1 interfaces to allow traffic between them

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

OK, so now we have traffic passing, which we can test by plugging anything into the LAN interface and trying to ping some Internet website.
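For example (hypothetical client, assuming it got a lease in the 192.168.23.x range):

$ ping -c 3 8.8.8.8       # raw connectivity through the NAT
$ ping -c 3 ubuntu.com    # also exercises DNS resolution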

And we save that in order to make it survive a reboot

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"

and add this line to /etc/network/interfaces.d/eth1.cfg:

up iptables-restore < /etc/iptables.ipv4.nat

Configuring MAAS

Prerequisites

$ sudo maas createadmin --username=admin --email=it@madeden.com

Then let’s get our API key and log in from the CLI:

$ sudo maas-region apikey --username=admin

Armed with the result of this command, just do:

$ # maas login <profile-name> <API-URL> [api-key]
$ maas login admin http://localhost/MAAS/api/2.0

Or you can do it all in one command:

$ sudo maas-region apikey --username=admin | \
  maas login admin http://localhost/MAAS/api/2.0 -
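As a sanity check (my own sketch; the exact sub-command can vary between MAAS releases), you can ask the API for the machines it knows about, using the admin profile we just logged in with:

$ maas admin machines read | grep hostname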

Now via the GUI, in the network tab, we rename our fabrics to match LAN, WAN and WLAN.

Then we hit the LAN network, and via the Take Action button, we enable DHCP on it.

The only thing we have to do is to start the nodes once. They will be handled by MAAS directly, and will appear in the GUI after a few minutes.

They will have a random name, and nothing configured.

First of all, we will rename them. To keep things simple in our experiment, we will use node00 and node01 for the two non-GPU nodes, and node02 to node05 for the four GPU nodes.

After we name them, we will also (a CLI sketch of the tagging follows below):
  • tag the 2 management nodes cpu-only
  • tag the 4 workers gpu
  • set the power method to Manual
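For reference, here is a hedged sketch of the same tagging done from the CLI instead of the GUI (tag names are from this post; verify the sub-commands against your MAAS 2.x version):

$ maas admin tags create name=cpu-only
$ maas admin tags create name=gpu
$ maas admin tag update-nodes gpu add=<system-id>    # repeat with each worker's system ID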

We then have something like

Commissioning nodes

This is where the fun begins. We need to “commission” the nodes; in other words, record information about them (HDD, CPU count…).

There is a bug in MAAS that blocks the deployment of systems. Look at comment #14 and apply the fix by editing /etc/maas/preseeds/curtin_userdata, adding a delay in the reboot section so it looks like:

power_state:
  mode: reboot
  delay: 30

Then we commission via the Take Action button, selecting Commission and leaving all 3 other options unticked. Right after that, we manually power on each of the nodes, and MAAS will do the rest, including powering them down at the end of the process. The UI will then look like:

MAAS commissioning nodes

MAAS from New To Commissioned

When commissioning is successful, we see all the values for HDD size, number of cores and memory filled in, and the node becomes Ready.

MAAS Logs at commissioning

Deploying with Juju

Bootstrapping the environment

First thing we need to do is connect Juju to MAAS. We create a configuration file for MAAS as provider, maas-juju.yaml, with contents:

maas:
  type: maas
  auth-types: [oauth1]
  endpoint: http://<MAAS_IP>/MAAS

Understand MAAS_IP as the address from which Juju will interact with MAAS, including from the nodes that are deployed. In our setup, you can use the address of eth1 (192.168.23.1).

You can find more details on this page

Then we need to tell Juju to use MAAS:

$ juju add-cloud maas maas-juju.yaml
$ juju add-credential maas

Juju needs to bootstrap, which brings up a first control node hosting the Juju controller, the initial database and various other requirements. This node is the reason we have 2 management nodes; the second one will be our k8s master.

In our setup, our nodes only have manual power, since WoL support was removed from MAAS in v2.0. This means we’ll need to trigger the bootstrap, wait for the node to be allocated, then start it manually.

$ juju bootstrap maas-controller maas
Creating Juju controller "maas-controller" on maas
Bootstrapping model "controller"
Starting new instance for initial controller
Launching instance
# This is where we start the node manually
WARNING no architecture was specified, acquiring an arbitrary node - 4y3h8w
Installing Juju agent on bootstrap instance
Preparing for Juju GUI 2.2.2 release installation
Waiting for address
Attempting to connect to 192.168.23.2:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: cloud-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <https://streams.canonical.com/juju/tools/agent/2.0-beta15/juju-2.0-beta15-xenial-amd64.tgz>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
Bootstrap complete, maas-controller now available.

And the MAAS GUI

Initial bundle deployment

We deploy the bundle file k8s.yaml below:

series: xenial
services:
  "kubernetes-master":
    charm: "cs:~containers/kubernetes-master-6"
    num_units: 1
    to:
      - "0"
    expose: true
    annotations:
      "gui-x": "800"
      "gui-y": "850"
    constraints: tags=cpu-only
  flannel:
    charm: "cs:~containers/flannel-5"
    annotations:
      "gui-x": "450"
      "gui-y": "750"
  easyrsa:
    charm: "cs:~containers/easyrsa-3"
    num_units: 1
    to:
      - "0"
    annotations:
      "gui-x": "450"
      "gui-y": "550"
  "kubernetes-worker":
    charm: "cs:~containers/kubernetes-worker-8"
    num_units: 1
    to:
      - "1"
    expose: true
    annotations:
      "gui-x": "100"
      "gui-y": "850"
    constraints: tags=gpu
  etcd:
    charm: "cs:~containers/etcd-14"
    num_units: 1
    to:
      - "0"
    annotations:
      "gui-x": "800"
      "gui-y": "550"
relations:
  - ["kubernetes-master:kube-api-endpoint", "kubernetes-worker:kube-api-endpoint"]
  - ["kubernetes-master:cluster-dns", "kubernetes-worker:kube-dns"]
  - ["kubernetes-master:certificates", "easyrsa:client"]
  - ["kubernetes-master:etcd", "etcd:db"]
  - ["kubernetes-master:sdn-plugin", "flannel:host"]
  - ["kubernetes-worker:certificates", "easyrsa:client"]
  - ["kubernetes-worker:sdn-plugin", "flannel:host"]
  - ["flannel:etcd", "etcd:db"]
machines:
  "0":
    series: xenial
  "1":
    series: xenial

We can see that we have constraints on the nodes to force MAAS to pick GPU nodes for the workers, and a CPU node for the master. We pass the command:

$ juju deploy k8s.yaml

That is it. This is the only command we will need to get a functional k8s running!

added charm cs:~containers/easyrsa-3
application easyrsa deployed (charm cs:~containers/easyrsa-3 with the series "xenial" defined by the bundle)
added resource easyrsa
annotations set for application easyrsa
added charm cs:~containers/etcd-14
application etcd deployed (charm cs:~containers/etcd-14 with the series "xenial" defined by the bundle)
annotations set for application etcd
added charm cs:~containers/flannel-5
application flannel deployed (charm cs:~containers/flannel-5 with the series "xenial" defined by the bundle)
added resource flannel
annotations set for application flannel
added charm cs:~containers/kubernetes-master-6
application kubernetes-master deployed (charm cs:~containers/kubernetes-master-6 with the series "xenial" defined by the bundle)
added resource kubernetes
application kubernetes-master exposed
annotations set for application kubernetes-master
added charm cs:~containers/kubernetes-worker-8
application kubernetes-worker deployed (charm cs:~containers/kubernetes-worker-8 with the series "xenial" defined by the bundle)
added resource kubernetes
application kubernetes-worker exposed
annotations set for application kubernetes-worker
created new machine 0 for holding easyrsa, etcd and kubernetes-master units
created new machine 1 for holding kubernetes-worker unit
related kubernetes-master:kube-api-endpoint and kubernetes-worker:kube-api-endpoint
related kubernetes-master:cluster-dns and kubernetes-worker:kube-dns
related kubernetes-master:certificates and easyrsa:client
related kubernetes-master:etcd and etcd:db
related kubernetes-master:sdn-plugin and flannel:host
related kubernetes-worker:certificates and easyrsa:client
related kubernetes-worker:sdn-plugin and flannel:host
related flannel:etcd and etcd:db
added easyrsa/0 unit to machine 0
added etcd/0 unit to machine 0
added kubernetes-master/0 unit to machine 0
added kubernetes-worker/0 unit to machine 1
deployment of bundle "k8s.yaml" completed

Which translates in the GUI as:

$ juju status
MODEL    CONTROLLER       CLOUD/REGION  VERSION
default  maas-controller  maas          2.0-beta15

APP                VERSION  STATUS  EXPOSED  ORIGIN      CHARM              REV  OS
easyrsa            3.0.1    active  false    jujucharms  easyrsa            3    ubuntu
etcd               2.2.5    active  false    jujucharms  etcd               14   ubuntu
flannel            0.6.1            false    jujucharms  flannel            5    ubuntu
kubernetes-master  1.4.5    active  true     jujucharms  kubernetes-master  6    ubuntu
kubernetes-worker           active  true     jujucharms  kubernetes-worker  8    ubuntu

RELATION      PROVIDES           CONSUMES           TYPE
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
sdn-plugin    flannel            kubernetes-master  regular
sdn-plugin    flannel            kubernetes-worker  regular
host          kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
host          kubernetes-worker  flannel            subordinate

UNIT                 WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS           MESSAGE
easyrsa/0            active    idle        0        192.168.23.3                    Certificate Authority connected.
etcd/0               active    idle        0        192.168.23.3    2379/tcp        Healthy with 1 known peers. (leader)
kubernetes-master/0  active    idle        0        192.168.23.3    6443/tcp        Kubernetes master running.
flannel/0            active    idle                 192.168.23.3                    Flannel subnet 10.1.57.1/24
kubernetes-worker/0  active    idle        1        192.168.23.4    80/tcp,443/tcp  Kubernetes worker running.
flannel/1            active    idle                 192.168.23.4                    Flannel subnet 10.1.67.1/24
kubernetes-worker/1  active    executing   2        192.168.23.5                    (install) Container runtime available.
kubernetes-worker/2  unknown   allocating  3        192.168.23.7                    Waiting for agent initialization to finish
kubernetes-worker/3  unknown   allocating  4        192.168.23.6                    Waiting for agent initialization to finish

MACHINE  STATE    DNS           INS-ID  SERIES  AZ
0        started  192.168.23.3  4y3h8x  xenial  default
1        started  192.168.23.4  4y3h8y  xenial  default
2        started  192.168.23.5  4y3ha3  xenial  default
3        pending  192.168.23.7  4y3ha6  xenial  default
4        pending  192.168.23.6  4y3ha4  xenial  default

or

$ juju status
MODEL    CONTROLLER       CLOUD/REGION  VERSION
default  maas-controller  maas          2.0-beta15

APP                VERSION  STATUS  EXPOSED  ORIGIN      CHARM              REV  OS
cuda                                false    local       cuda               0    ubuntu
easyrsa            3.0.1    active  false    jujucharms  easyrsa            3    ubuntu
etcd               2.2.5    active  false    jujucharms  etcd               14   ubuntu
flannel            0.6.1            false    jujucharms  flannel            5    ubuntu
kubernetes-master  1.4.5    active  true     jujucharms  kubernetes-master  6    ubuntu
kubernetes-worker  1.4.5    active  true     jujucharms  kubernetes-worker  8    ubuntu

RELATION      PROVIDES           CONSUMES           TYPE
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
sdn-plugin    flannel            kubernetes-master  regular
sdn-plugin    flannel            kubernetes-worker  regular
host          kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
host          kubernetes-worker  flannel            subordinate

UNIT                 WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS  PORTS           MESSAGE
easyrsa/0            active    idle   0        192.168.23.3                    Certificate Authority connected.
etcd/0               active    idle   0        192.168.23.3    2379/tcp        Healthy with 1 known peers. (leader)
kubernetes-master/0  active    idle   0        192.168.23.3    6443/tcp        Kubernetes master running.
flannel/0            active    idle            192.168.23.3                    Flannel subnet 10.1.57.1/24
kubernetes-worker/0  active    idle   1        192.168.23.4    80/tcp,443/tcp  Kubernetes worker running.
flannel/1            active    idle            192.168.23.4                    Flannel subnet 10.1.67.1/24
kubernetes-worker/1  active    idle   2        192.168.23.5    80/tcp,443/tcp  Kubernetes worker running.
flannel/2            active    idle            192.168.23.5                    Flannel subnet 10.1.100.1/24
kubernetes-worker/2  active    idle   3        192.168.23.7    80/tcp,443/tcp  Kubernetes worker running.
flannel/3            active    idle            192.168.23.7                    Flannel subnet 10.1.14.1/24
kubernetes-worker/3  active    idle   4        192.168.23.6    80/tcp,443/tcp  Kubernetes worker running.
flannel/4            active    idle            192.168.23.6                    Flannel subnet 10.1.83.1/24

MACHINE  STATE    DNS           INS-ID  SERIES  AZ
0        started  192.168.23.3  4y3h8x  xenial  default
1        started  192.168.23.4  4y3h8y  xenial  default
2        started  192.168.23.5  4y3ha3  xenial  default
3        started  192.168.23.7  4y3ha6  xenial  default
4        started  192.168.23.6  4y3ha4  xenial  default

We now need kubectl to query the cluster. We need to refer to this k8s issue and use the method for Hypriot OS:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat << EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
$ apt-get install -y kubectl
$ kubectl get nodes --show-labels
NAME    STATUS  AGE  LABELS
node02  Ready   1h   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02
node03  Ready   1h   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node03
node04  Ready   57m  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node04
node05  Ready   58m  beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node05

Adding CUDA

CUDA does not have an official charm yet, so I wrote a hacky bash script to make it work, which you can find on GitHub.

To build the charm you’ll want an x86 computer rather than the RPi. You will need juju, charm and charm-tools installed there, then run:

$ export JUJU_REPOSITORY=${HOME}/charms
$ export LAYER_PATH=${JUJU_REPOSITORY}/layers
$ export INTERFACE_PATH=${JUJU_REPOSITORY}/interfaces
$ cd ${LAYER_PATH}
$ git clone https://github.com/SaMnCo/layer-nvidia-cuda cuda
$ cd cuda
$ charm build

This will create a new folder called builds in JUJU_REPOSITORY, with another called cuda inside. Just scp that to the Raspberry Pi, into a charms subfolder of your home directory:

$ scp -r ${JUJU_REPOSITORY}/builds/cuda ${USER}@raspberrypi:/home/${USER}/charms/cuda

To deploy the charm we just created:

$ juju deploy --series xenial $HOME/charms/cuda
$ juju add-relation cuda kubernetes-worker

This will take some time (CUDA downloads gigabytes of code and binaries…), but ultimately we get to:

$ juju status
MODEL    CONTROLLER       CLOUD/REGION  VERSION
default  maas-controller  maas          2.0-beta15

APP                VERSION  STATUS  EXPOSED  ORIGIN      CHARM              REV  OS
cuda                                false    local       cuda               0    ubuntu
easyrsa            3.0.1    active  false    jujucharms  easyrsa            3    ubuntu
etcd               2.2.5    active  false    jujucharms  etcd               14   ubuntu
flannel            0.6.1            false    jujucharms  flannel            5    ubuntu
kubernetes-master  1.4.5    active  true     jujucharms  kubernetes-master  6    ubuntu
kubernetes-worker  1.4.5    active  true     jujucharms  kubernetes-worker  8    ubuntu

RELATION      PROVIDES           CONSUMES           TYPE
juju-info     cuda               kubernetes-worker  regular
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
sdn-plugin    flannel            kubernetes-master  regular
sdn-plugin    flannel            kubernetes-worker  regular
host          kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
juju-info     kubernetes-worker  cuda               subordinate
host          kubernetes-worker  flannel            subordinate

UNIT                 WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS  PORTS           MESSAGE
easyrsa/0            active    idle   0        192.168.23.3                    Certificate Authority connected.
etcd/0               active    idle   0        192.168.23.3    2379/tcp        Healthy with 1 known peers. (leader)
kubernetes-master/0  active    idle   0        192.168.23.3    6443/tcp        Kubernetes master running.
flannel/0            active    idle            192.168.23.3                    Flannel subnet 10.1.57.1/24
kubernetes-worker/0  active    idle   1        192.168.23.4    80/tcp,443/tcp  Kubernetes worker running.
cuda/2               active    idle            192.168.23.4                    CUDA installed and available
flannel/1            active    idle            192.168.23.4                    Flannel subnet 10.1.67.1/24
kubernetes-worker/1  active    idle   2        192.168.23.5    80/tcp,443/tcp  Kubernetes worker running.
cuda/0               active    idle            192.168.23.5                    CUDA installed and available
flannel/2            active    idle            192.168.23.5                    Flannel subnet 10.1.100.1/24
kubernetes-worker/2  active    idle   3        192.168.23.7    80/tcp,443/tcp  Kubernetes worker running.
cuda/3               active    idle            192.168.23.7                    CUDA installed and available
flannel/3            active    idle            192.168.23.7                    Flannel subnet 10.1.14.1/24
kubernetes-worker/3  active    idle   4        192.168.23.6    80/tcp,443/tcp  Kubernetes worker running.
cuda/1               active    idle            192.168.23.6                    CUDA installed and available
flannel/4            active    idle            192.168.23.6                    Flannel subnet 10.1.83.1/24

MACHINE  STATE    DNS           INS-ID  SERIES  AZ
0        started  192.168.23.3  4y3h8x  xenial  default
1        started  192.168.23.4  4y3h8y  xenial  default
2        started  192.168.23.5  4y3ha3  xenial  default
3        started  192.168.23.7  4y3ha6  xenial  default
4        started  192.168.23.6  4y3ha4  xenial  default

Pretty awesome, we now have CUDERNETES!

We can connect to every GPU node individually and run:

$ sudo nvidia-smi
Wed Nov  9 06:06:44 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 0000:02:00.0     Off |                  N/A |
| 28%   31C    P0    27W / 120W |      0MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Good job!!

Enabling CUDA in Kubernetes

By default, CDK will not activate GPUs when starting the API server and the kubelet on the workers. We need to do that manually (though this is on the roadmap).

Master Update

On the master node, update /etc/default/kube-apiserver to add:

# Security Context
KUBE_ALLOW_PRIV="--allow-privileged=true"

Then restart the API service via

$ sudo systemctl restart kube-apiserver

So now the Kube API will accept requests to run privileged containers, which are required for GPU workloads.
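A quick, unofficial way to confirm the flag took effect (my addition; it simply inspects the running process):

$ ps -ef | grep '[k]ube-apiserver' | tr ' ' '\n' | grep allow-privileged    # expect --allow-privileged=true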

Worker nodes

On every worker, edit /etc/default/kubelet to add the GPU flag, so it looks like:

# Security Context
KUBE_ALLOW_PRIV="--allow-privileged=true"

# Add your own!
KUBELET_ARGS="--experimental-nvidia-gpus=1 --require-kubeconfig --kubeconfig=/srv/kubernetes/config --cluster-dns=10.1.0.10 --cluster-domain=cluster.local"

Then restart the service via

$ sudo systemctl restart kubelet

Testing the setup

Now that we have CUDA GPUs enabled in k8s, let us test that everything works. We take a very simple job that just runs nvidia-smi from a pod and exits on success.

The job definition is

apiVersion: batch/v1
kind: Job
metadata:
  name: nvidia-smi
  labels:
    name: nvidia-smi
spec:
  template:
    metadata:
      labels:
        name: nvidia-smi
    spec:
      containers:
        - name: nvidia-smi
          image: nvidia/cuda
          command: [ "nvidia-smi" ]
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            requests:
              alpha.kubernetes.io/nvidia-gpu: 1
            limits:
              alpha.kubernetes.io/nvidia-gpu: 1
          volumeMounts:
            - mountPath: /dev/nvidia0
              name: nvidia0
            - mountPath: /dev/nvidiactl
              name: nvidiactl
            - mountPath: /dev/nvidia-uvm
              name: nvidia-uvm
            - mountPath: /usr/local/nvidia/bin
              name: bin
            - mountPath: /usr/lib/nvidia
              name: lib
      volumes:
        - name: nvidia0
          hostPath:
            path: /dev/nvidia0
        - name: nvidiactl
          hostPath:
            path: /dev/nvidiactl
        - name: nvidia-uvm
          hostPath:
            path: /dev/nvidia-uvm
        - name: bin
          hostPath:
            path: /usr/lib/nvidia-367/bin
        - name: lib
          hostPath:
            path: /usr/lib/nvidia-367
      restartPolicy: Never

What is interesting here:

  • We do not have the abstraction provided by nvidia-docker, so we have to manually specify the mount points for the char devices
  • We also need to share the driver and library folders
  • In the resources, we have to both request and limit the resources with 1 GPU
  • The container has to run privileged

Now if we run this:

$ kubectl create -f nvidia-smi-job.yaml
$ # Wait for a few seconds so the cluster can download and run the container
$ kubectl get pods -a -o wide
NAME                            READY  STATUS     RESTARTS  AGE  IP         NODE
default-http-backend-8lyre      1/1    Running    0         11h  10.1.67.2  node02
nginx-ingress-controller-bjplg  1/1    Running    1         10h  10.1.83.2  node04
nginx-ingress-controller-etalt  0/1    Pending    0         6m
nginx-ingress-controller-q2eiz  1/1    Running    0         10h  10.1.14.2  node05
nginx-ingress-controller-ulsbp  1/1    Running    0         11h  10.1.67.3  node02
nvidia-smi-xjl6y                0/1    Completed  0         5m   10.1.14.3  node05

We see the last container has run and completed. Let us look at the output of the run:

$ kubectl logs nvidia-smi-xjl6y
Wed Nov  9 07:52:42 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 0000:02:00.0     Off |                  N/A |
| 28%   33C    P0    29W / 120W |      0MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Perfect, we have the same result as if we had run nvidia-smi from the host, which means we are all good to operate GPUs!

Conclusion

So what did we achieve here? Kubernetes is the most versatile container management system around. It also shares some genes with TensorFlow, which itself is often demoed in containers, in a scale-out fashion.

It is only natural to speed up Deep Learning workloads by adding GPUs at scale. This poor man’s GPU cluster is an example of what a small R&D team can do if they want to experiment with multi-node scalability.

We have a secondary benefit. You will have noticed that the deployment of k8s is completely automated here (besides the GPU enablement), thanks to Juju and the team behind CDK. The community behind Juju creates many charms, and there is a large collection of scale-out applications that can be deployed at scale, like Hadoop, Kafka, Spark, Elasticsearch (…).

In the end, the investment is only MAAS and a few commands. Juju’s ROI in R&D is a matter of days.

Thanks

Huge thanks to Marco Ceppi, Chuck Butler and Matt Bruzek at Canonical for the fantastic work on CDK, and responsiveness to my (numerous) questions.

Dimitri John Ledkov: 2017 is the new 1984

Planet Ubuntu - Sun, 01/29/2017 - 15:23
1984: Library Edition. Novel by George Orwell, cover picture by Google Search result.

I am scared.
I am petrified.
I am confused.
I am sad.
I am furious.
I am angry.

28 days later I shall return from NYC.

I hope.

We’re looking for Ubuntu 17.04 wallpapers right now!

The Fridge - Sat, 01/28/2017 - 19:10

Ubuntu is a testament to the power of sharing, and we use the default selection of desktop wallpapers in each release as a way to celebrate the larger Free Culture movement. Talented artists across the globe create media and release it under licenses that don’t simply allow, but cheerfully encourage sharing and adaptation. This cycle’s Free Culture Showcase for Ubuntu 17.04 is now underway!

We’re halfway to the next LTS, and we’re looking for beautiful wallpaper images that will literally set the backdrop for new users as they use Ubuntu 17.04 every day. Whether on the desktop, phone, or tablet, your photo or illustration can be the first thing Ubuntu users see whenever they are greeted by the ubiquitous Ubuntu welcome screen or access their desktop.

Submissions will be handled via Flickr at the Ubuntu 17.04 Free Culture Showcase – Wallpapers group, and the submission window begins now and ends on March 5th.

More information about the Free Culture Showcase is available on the Ubuntu wiki at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase.

I’m looking forward to seeing the 10 photos and 2 illustrations that will ship on all graphical Ubuntu 17.04-based systems and devices on April 13th!

Originally published here on Sat Jan 28 by Nathan Haines

Kubuntu General News: Kubuntu 17.04 Alpha 2 released for testers

Planet Ubuntu - Sat, 01/28/2017 - 13:58

Today the Kubuntu team is happy to announce the release of Kubuntu Zesty Zapus (17.04) Alpha 2. With this pre-release, you can see what we are trying out in preparation for 17.04, which we will be releasing in April.

NOTE: This is an Alpha 2 release. Kubuntu Alpha releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Alpha 2
* To upgrade from 16.10, run do-release-upgrade from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive

Lubuntu Blog: Zesty Zapus Alpha 2 released

Planet Ubuntu - Sat, 01/28/2017 - 12:15
The second alpha of the Zesty Zapus (to become 17.04) has now been released! This milestone features images for Lubuntu, Kubuntu, Ubuntu MATE, Ubuntu Kylin, Ubuntu GNOME, and Ubuntu Budgie. Pre-releases of the Zesty Zapus are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent […]

Ubuntu Insights: Ubuntu Core – how to enable aliases for your snaps commands

Planet Ubuntu - Sat, 01/28/2017 - 04:00

We are happy to announce that a new version of Ubuntu Core, based on snapd 2.21, was released to the stable snap channel yesterday.

As with any stable release, your Ubuntu Core devices will update and reboot automatically. If you are using snaps on the desktop, the release will reach you through a snapd package update on Ubuntu 16.04, 16.10 and 17.04.

This release comes with several improvements you can read about in the changelog, but let’s focus on a big feature that will help people who snap very large software, especially software that comes with many commands (such as OpenStack, ImageMagick, most databases…) and their users.

Introducing snap aliases

When you launch a snap from the command line, you need to use the name of the snap, then the name of a command it contains. In most cases, you don’t notice it, because snapd simplifies the process by collapsing <snap-name>.<command-name> into <command-name>, when both are the same. This way, you don’t need to type inkscape.inkscape, but simply inkscape and get a familiar software experience.
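Concretely (a sketch reusing the inkscape example above):

$ sudo snap install inkscape
$ inkscape drawing.svg    # no inkscape.inkscape needed: snap name and command name match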

But when a snap contains multiple commands, with various names, things can become less familiar. If we take the PostgreSQL snap as an example, we can see it comes with many commands: initdb, createdb, etc. In this case, you have to run postgresql96.initdb, postgresql96.createdb, etc.
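For instance, initializing a database cluster goes through the namespaced command (the data directory path here is hypothetical):

$ postgresql96.initdb -D $HOME/pgdata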

The alias feature of snapd lets snaps declare their own aliases, for users to manually enable after install or for snap stores to declare as “auto-aliases” that will be enabled upon install.

How to enable aliases

To have an overview of all available aliases on a system, you can use the snap aliases command.

$ snap aliases
App                    Alias    Notes
firefox-devel.firefox  firefox  -

You can see I have a snap with the name firefox-devel, containing a firefox command and a firefox alias.

I can either use firefox-devel.firefox as my command to launch Firefox, or use snap alias <snap-name> <alias> to enable the alias.

$ snap alias firefox-devel firefox
$ snap aliases
App                    Alias    Notes
firefox-devel.firefox  firefox  enabled

I can now launch my firefox-devel snap with the firefox command.

You can also use snap unalias to disable aliases for a specific snap.
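Assuming it mirrors the snap alias syntax shown above, disabling the alias we just enabled would look like:

$ snap unalias firefox-devel firefox
$ snap aliases    # the Notes column for firefox should revert to '-'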

How to declare an alias

Declaring a new alias in your snap is as easy as adding one more entry to your snapcraft.yaml apps section.

$ cat firefox-devel/snapcraft.yaml
[...]
apps:
  firefox-devel:
    command: bin/firefox
    aliases: [firefox]
[...]

That’s it! Head over to tutorials.ubuntu.com to make your own snap from scratch and give aliases a try!

The Fridge: Zesty Zapus Alpha 2 Released

Planet Ubuntu - Fri, 01/27/2017 - 20:23

“Without deviation from the norm, progress is not possible.”

― Frank Zapus

The second alpha of the Zesty Zapus (to become 17.04) has now been released!

This milestone features images for Lubuntu, Kubuntu, Ubuntu MATE, Ubuntu Kylin, Ubuntu GNOME, and Ubuntu Budgie.

Pre-releases of the Zesty Zapus are not encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 2 includes a number of software updates that are ready for wider testing. This is still an early set of images, so you should expect some bugs.

While these Alpha 2 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Zesty Zapus. In particular, once newer daily images are available, system installation bugs identified in the Alpha 2 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 17.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Lubuntu

Lubuntu is a flavor of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 17.04 Alpha 2 images can be downloaded from:

More information about Lubuntu 17.04 Alpha 2 can be found here:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu MATE 17.04 Alpha 2 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu Kylin 17.04 Alpha 2 can be found here:

Kubuntu

Kubuntu is the KDE based flavor of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Kubuntu 17.04 Alpha 2 images can be downloaded from:

More information about Kubuntu 17.04 Alpha 2 can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Ubuntu GNOME 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu GNOME 17.04 Alpha 2 can be found here:

Ubuntu Budgie

Ubuntu Budgie is a flavor of Ubuntu featuring the Budgie desktop environment.

The Ubuntu Budgie 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu Budgie 17.04 Alpha 2 can be found here:

If you’re interested in following the changes as we further develop the Zesty Zapus, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a month or less) carrying announcements of approved specifications, policy changes, alpha releases, and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release, and welcome Ubuntu Budgie!

Originally posted to the ubuntu-devel-announce mailing list on Fri Jan 27 21:16:28 UTC 2017 by Simon Quigley on behalf of the Ubuntu Release Team

Ubuntu Insights: Award-winning drone technology with Ubuntu

Planet Ubuntu - Fri, 01/27/2017 - 10:28

The market for drones is exploding as businesses and individuals embrace them. The global market for commercial applications of drone technology will balloon to as much as $127 billion by 2020, up from £2 billion today (PwC). Aerotenna is one of the innovators making this vision a reality.

Aerotenna’s award-winning technology seeks to solve the UAV autonomous flight challenge: preventing UAVs from colliding with non-cooperative objects or other UAVs. Check out this video that shows it in action:

Autonomous Collision Avoidance- Mission from Aerotenna on Vimeo.

To learn more about Aerotenna’s award-winning technology, download the case study below. Highlights include:

  • Partnering with Intel® and Xilinx®, Aerotenna developed and released OcPoC with Altera Cyclone and Xilinx Zynq, with an industry-leading 100+ I/Os for sensor integration, and FPGA for sensor fusion, real-time data processing and deep learning
  • One such sensor is Aerotenna’s microwave radar that allows the drone to detect surrounding objects in all light conditions and environments, important for safe flying of UAVs
  • Ubuntu powers the OcPoC giving developers a familiar, extensible platform to build drone solutions based on the powerful combination of multiple sensors and complex robotics algorithms

Download the case study

Ubuntu Insights: ROS on arm64 with Ubuntu Core

Planet Ubuntu - Fri, 01/27/2017 - 10:03

This is a guest post by Kyle Fazzari, Engineer from Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Previous Robot Operating System (ROS) releases only supported i386, amd64, and armhf. I even tried building ROS Indigo from source for arm64 about a year ago, but ran into dependency issues with a missing sbcl. Well, with surprisingly little fanfare, ROS Kinetic was released with support for arm64 in their prebuilt archive! I thought it might be time to take it for a spin with Ubuntu Core and its arm64 reference board, the DragonBoard 410c, and thought I’d take you along for the ride.

Step 1: Install Ubuntu Core

The first order of business is to install Ubuntu Core on the DragonBoard. Just follow the documentation. I want to mention two things for this step, though:

  1. I’ve never had good luck with wifi on the DragonBoard; it seems incredibly unstable. I recommend not even bothering with it and using a USB-to-ethernet adapter, since with Ubuntu Core your first login must be over SSH anyway.
  2. There’s a bug that causes the first boot wizard to take quite some time between entering your SSO password and finishing. Don’t worry, just leave it alone, it’ll finish (mine took about 7 minutes).
Step 2: Make sure it’s up-to-date

SSH into the machine (or if you set a password, login locally, it doesn’t matter), and run the following command to ensure everything is completely up-to-date:

$ sudo snap refresh

If it updated, you may need to reboot. Go ahead and do that now, we’ll wait.

Step 3: Install Snapcraft

You may or may not be familiar with Ubuntu Core, but the first thing people typically notice is that it doesn’t use Debian packages (.debs); it uses snaps (read up on them, they’re pretty awesome). However, in this case that leaves us without what we need, which is a development environment. We need to build a ROS workspace into a snap, which means we’ll need ROS’s archive as well as the snap packaging tool, snapcraft, all of which are only available as .debs. Fortunately, the environment we want is available as a snap:

$ snap install classic --edge --devmode
$ sudo classic
<unpacking stuff... snip>
(classic)kyrofa@localhost:~$

The (classic) prompt modifier tells you that you’re now in a classic shell. Now you can do familiar things such as update the package index and install Snapcraft, both of which you should do now:

(classic)kyrofa@localhost:~$ sudo apt update
(classic)kyrofa@localhost:~$ sudo apt install snapcraft

Step 4: Work around bug #1650207

Take a look at your Linux workstation (not the DragonBoard). For example, I’m running Ubuntu Xenial. The contents of my /etc/lsb-release file look like this:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"

Multiple utilities, including Snapcraft and Catkin (the build system for ROS), use this file to determine the OS upon which they’re running. This file looks a bit different on Ubuntu Core:

DISTRIB_ID="Ubuntu Core"
DISTRIB_RELEASE=16
DISTRIB_DESCRIPTION="Ubuntu Core 16"

As of this writing, due to a bug, the classic shell doesn’t replace this file with one that looks like Xenial’s, which means that neither Snapcraft nor Catkin (nor various other tools) will work correctly. Fortunately, that file is writable, so we can work around the problem just by making Ubuntu Core’s /etc/lsb-release look like Xenial’s:

(classic)kyrofa@localhost:~$ cat << EOF | sudo tee /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"
EOF

Step 5: Build your snap

Probably the simplest ROS workspace you can imagine is a single talker and a single listener. Snapcraft has exactly that as one of its demos, so for this example we’ll just use it (you might consider reading that demo’s walkthrough). First, though, that means fetching the Snapcraft source code:

(classic)kyrofa@localhost:~$ sudo apt install git
(classic)kyrofa@localhost:~$ git clone https://github.com/snapcore/snapcraft
(classic)kyrofa@localhost:~$ cd snapcraft/demos/ros

The ROS demo in question will, by default, build for Indigo. But you’re interested in arm64! You can’t use Indigo. You need something newer… something shinier, and perhaps less purple. You need to use Kinetic. You tell Snapcraft this with the rosdistro property. Open up the snapcraft.yaml and edit the appropriate section to look like this (the rosdistro line is the addition):

[...]
parts:
  ros-project:
    plugin: catkin
    source: .
    rosdistro: kinetic
    catkin-packages:
      - talker
      - listener
    include-roscore: true

Save that file and exit. Finally, it’s time to run snapcraft and watch that workspace get built into a snap (this will take a while; the DragonBoard is not a blazing workhorse, so if you want something faster, use the snap builders):

(classic)kyrofa@localhost:~/snapcraft/demos/ros$ snapcraft
<snip pulling, building, staging>
Priming ros-project
Snapping 'ros-example'
Snapped ros-example_1.0_arm64.snap

In the end you have a snap. You should exit out of the classic shell at this point (e.g. ctrl+d).

Step 6: Install and run it!

Now it’s time to install the snap you just built. Even though you’ve exited the classic shell, your $HOME remained the same, so you can install the snap like so:

$ cd snapcraft/demos/ros
$ sudo snap install --dangerous ros-example_1.0_arm64.snap

(Remember, the --dangerous flag is needed because you just built the snap locally, so snapd can’t verify its publisher. You’re telling it that’s okay.)

Finally, run the application contained within the snap (it’ll launch the talker/listener system):

$ ros-example.launch-project
<snip>
SUMMARY
========

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.6

NODES
  /
    listener (listener/listener_node)
    talker (talker/talker_node)

auto-starting new master
<snip>
process[talker-2]: started with pid [25827]
process[listener-3]: started with pid [25828]
[ INFO] [1485394763.340461416]: Hello world 0
[ INFO] [1485394763.440354547]: Hello world 1
[ INFO] [1485394763.540334917]: Hello world 2
[ INFO] [1485394763.640330599]: Hello world 3
[ INFO] [1485394763.740335917]: Hello world 4
[ INFO] [1485394763.840366912]: Hello world 5
[ INFO] [1485394763.940342594]: Hello world 6
[ INFO] [1485394764.040321141]: Hello world 7
[ INFO] [1485394764.140323334]: Hello world 8
[ INFO] [1485394764.240328548]: Hello world 9
[ INFO] [1485394764.340319074]: Hello world 10
[INFO] [1485394764.341486]: I heard Hello world 10
[ INFO] [1485394764.440333194]: Hello world 11
[INFO] [1485394764.441476]: I heard Hello world 11
[ INFO] [1485394764.540333772]: Hello world 12
[INFO] [1485394764.541450]: I heard Hello world 12

Conclusion

I haven’t yet pushed ROS’s arm64 support very hard, but I’m thoroughly pleased that support is present. Particularly paired with snaps and Ubuntu Core, I think this opens the door to a lot of amazing robotic possibilities.

Original post can be found here.

Rhonda D'Vine: Icona Pop

Planet Ubuntu - Fri, 01/27/2017 - 06:22

Last fall I went to a Silent Disco event. You get wireless headphones, a DJane (a female DJ) and a DJ were playing music on different channels, and you enjoy the time with people around you who can't hear what you hear. It's a pretty fun experience, and it was one of the last warm, sunny days. There I heard a song that fit the mood of the moment, and it made me look up the band to listen to them more closely.

The band was Icona Pop; they have an uplifting pop sound that cheers you up. Here are the songs I want to present to you today:

  • I Love It: The first song I heard from them, and I Love It!
  • Girlfriend: Sweet song, and probably part of the reason they are well received in the LGBTIQ community.
  • All Night: A song/video with a message.

Like always, enjoy!


Ubuntu Insights: Using the ubuntu-app-platform content interface in app snaps

Planet Ubuntu - Thu, 01/26/2017 - 08:44

This is a guest post by Olivier Tilloy, Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Recently the ubuntu-app-platform snap has been made available in the store for application developers to build their snaps without bundling all their dependencies. The ubuntu-app-platform snap includes the standard Qt libraries (version 5.6.1 as of this writing) and QML runtime, the Ubuntu UI Toolkit and related dependencies, and Oxide (a web engine based on the Chromium content API, with QML bindings).

This allows app developers to declare a dependency on this snap through the content-sharing mechanism, thus dramatically reducing the size of the resulting app snaps.

I went through the exercise with the webbrowser-app snap. This proved surprisingly easy and the size of the snap (amd64 architecture) went down from 136MB to 22MB, a sizeable saving!

For those interested in the details, here are the actual changes in the snapcraft.yaml file: https://bazaar.launchpad.net/~phablet-team/webbrowser-app/staging/revision/1576

Essentially, they consist of:

  • Using the ‘platform’ plug (content interface) and specifying its default provider (‘ubuntu-app-platform’)
  • Removing pretty much all stage packages
  • Adding an implicit dependency on the ‘desktop-ubuntu-app-platform’ wiki part
  • Adding an empty ‘ubuntu-app-platform’ directory in the snap where snapd will bind-mount the content shared by the ubuntu-app-platform snap

Note that the resulting snap could be made even smaller. There is a known bug in snapcraft where it uses ldd to crawl the dependencies, ignoring the fact that those dependencies are already present in the ubuntu-app-platform snap.

Also note that if your app depends on any Qt module that isn’t bundled with ubuntu-app-platform, you will need to add it to the stage packages of your snap, and this is likely to bring in all the Qt dependencies, thus duplicating them. The easy fix for this situation is to override snapcraft’s default behaviour by specifying which files the part should install, using the “snap” section (see what was done for e.g. address-book-app at https://code.launchpad.net/~renatofilho/address-book-app/ubuntu-app-platform/+merge/311351).

Harald Sitter: KDE Slimbook

Planet Ubuntu - Thu, 01/26/2017 - 05:32

The past couple of months an elite team of KDE contributors worked on a top-secret project. Today we finally announced it to the public.

The KDE Slimbook

Together with the Spanish laptop retailer Slimbook we created our very first KDE laptop. It looks super slick and sports an ever so sexy KDE Slimbook logo on the back of the screen. It will initially come with KDE neon as its operating system.

Naturally, as one of the neon developers, I did some software work to help this along. Last year we switched to a more reliable graphics driver. Our installer got a face-lift to make it more visually appealing, and it gained an actually working OEM installation mode. A hardware integration feature was added to our package pool to make sure the KDE Slimbook works perfectly out of the box.

The device looks and feels awesome. Plasma’s stellar look and feel complements it very well, making for a perfect overall experience.

I am super excited and can’t wait for more people to get their hands on it, so we get closer to a world in which everyone has control over their digital life and enjoys freedom and privacy, thanks to KDE.

Stephan Adig: Dear OpenStack Foundation

Planet Ubuntu - Thu, 01/26/2017 - 02:39

why do I need to be an OpenStack Foundation Member when I want to send you a bugfix via PR on GitHub?

I don't wanna work on OpenStack per se, I just want to use one of your little utils from your stack and it doesn't work as expected under a newer version of Python :)

It would be nice if the barrier to contributing could be lowered.

Meerkat: The Perfect C Array Library

Planet Ubuntu - Thu, 01/26/2017 - 01:38

I love C. And I loathe C++.

But there’s one thing I like about C++: The fact that I don’t have to write my own dynamic array libraries each time I try to start a project.

Of course, many libraries exist for working with arrays in C: GLib, Eina, DynArray, etc. But I wanted something as easy to use as C++’s std::vector, with the performance and memory usage of std::vector.

By the way, I am not talking about algorithmic performance. I’m writing this assuming the algorithms are identical (i.e. I’m writing purely about implementation differences).

There are a few problems with the performance and memory usage of the aforementioned libraries, the major one being that the element size is stored as a structure member. That means an extra 4-8 bytes per array, and it means constantly re-reading a variable at run time (which costs many optimization opportunities). While this may not sound too bad (and in the grand scheme of things, probably isn’t), it is undeniably less efficient than C++.
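
To make the overhead concrete, here’s a minimal sketch of that conventional pattern (my own illustration, not code taken from any of those libraries). The element size is a runtime struct member, so every operation has to load it and copy through memcpy:

#include <stdlib.h>
#include <string.h>

/* Conventional generic array: the element size is a runtime value. */
typedef struct {
    void*  data;
    size_t length;
    size_t elem_size; /* the extra 4-8 bytes discussed above */
} generic_array;

void generic_array_push(generic_array* a, const void* element)
{
    /* elem_size must be read from memory on every call, and the copy
       goes through memcpy even for register-sized types. Error
       handling is omitted, matching the style of the examples below. */
    a->data = realloc(a->data, a->elem_size * (a->length + 1));
    memcpy((char*)a->data + a->elem_size * a->length, element, a->elem_size);
    a->length++;
}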

This isn’t the only problem; there are other missed optimizations in the function-based (as opposed to macro-based) variants: calling functions for tiny operations, calling memcpy for types that fit within registers, and so on.

All of this might seem like splitting hairs, and it probably is. But knowing that C++ can be faster, more memory efficient, and less bothersome to code in than C is not a thought I like very much. So I wanted to try to level the playing field.

It took a rather long stretch of sporadic work to create my very own “Perfect C Array Library” that, I thought, fulfilled my requirements.

First, let’s look at some example code using it:

array(int) myarray = array_init();
array_push(myarray, 5);
array_push(myarray, 6);

for (int i = 0; i < myarray.length; i++) {
    printf("%i\n", myarray.data[i]);
}

array_free(myarray);

Alright, it might be a tiny bit less pretty than C++. But hey, this is good enough for me.

In terms of performance and memory usage, I fixed the issues described above. So in theory, it should be just as fast as C++, right?

Turns out I missed one issue: cache misses. In my mind, if everything was written as a macro, it would in theory be faster than functions. I was wrong. Inlining large portions of code at every call site can result in instruction-cache misses, which quite negatively impact performance.

So, as far as I can see, it is impossible to write a set of array functions for C that will be as fast and easy to use as C++’s std::vector. But please correct me if I’m wrong!

With that being said, this implementation is the most efficient I’ve been able to write so far, so let me show you the idea behind it:

/* The element type lives in an anonymous struct, so the compiler
   knows it statically; no element-size member is needed. */
#define array(type) \
    struct { \
        type* data; \
        size_t length; \
    }

#define array_init() \
    { \
        .data = NULL, \
        .length = 0 \
    }

#define array_free(array) \
    do { \
        free(array.data); \
        array.data = NULL; \
        array.length = 0; \
    } while (0)

#define array_push(array, element) \
    do { \
        array.data = realloc(array.data, \
                             sizeof(*array.data) * \
                             (array.length + 1)); \
        array.data[array.length] = element; \
        array.length++; \
    } while (0)

The magic is in sizeof(*array.data). For some reason I never knew this was legal in C, but it does exactly what it says: it yields the size of the element type, which eliminates the need to store that size in the struct.
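
If dereferencing a pointer that may be NULL inside sizeof looks alarming: it isn’t. For non-variable-length types, sizeof does not evaluate its operand at all; the compiler only inspects the operand’s type, so no dereference ever happens at run time. A quick sanity check (my own snippet, not part of the library):

#include <stdio.h>

int main(void)
{
    double* p = NULL;
    /* sizeof(*p) is resolved at compile time from the type of *p;
       p is never actually dereferenced. */
    printf("%zu\n", sizeof(*p)); /* prints 8 on typical platforms */
    return 0;
}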

The code above is oversimplified to demonstrate the idea. It’s very incomplete, algorithmically slow, and unsafe. But the idea is there.
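
To give one idea of what “less algorithmically slow” could look like while respecting the cache-miss lesson above, here’s a sketch of how I might grow the array geometrically. The inlined macro body stays tiny, and the rare growth path lives in one shared out-of-line function (array_grow_ is my own hypothetical helper, not part of the library, and error handling is again omitted):

#include <stdlib.h>

/* Out-of-line slow path, shared by every instantiation, so the code
   the macro inlines at each call site stays small. */
static void* array_grow_(void* data, size_t* capacity, size_t elem_size)
{
    *capacity = *capacity ? *capacity * 2 : 8; /* geometric growth */
    return realloc(data, *capacity * elem_size);
}

/* Note: array_init() would also need to zero the new capacity member. */
#define array(type) \
    struct { \
        type* data; \
        size_t length; \
        size_t capacity; \
    }

#define array_push(array, element) \
    do { \
        if ((array).length == (array).capacity) \
            (array).data = array_grow_((array).data, \
                                       &(array).capacity, \
                                       sizeof(*(array).data)); \
        (array).data[(array).length++] = (element); \
    } while (0)

The element size does get passed as a runtime argument here, but only on the rare growth path; the common case is a compare, a store, and an increment.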

To summarize, I am not aware of any way to write a completely zero-compromise array library in C, but the code above shows the closest I’ve come to that.


P.S. There is one problem I am aware of with this method:

array(int) myarray;
array(int) myarray1 = myarray; /* 'error: invalid initializer' */

There are two ways to get around this:

memcpy(&myarray1, &myarray, sizeof(myarray));
/* or */
myarray1 = *((typeof(myarray1)*)&myarray); /* requires GNU C */

Both should, under a decent optimization level, result in the same assembly.

