This is a valid RSS feed.
This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.
line 81, column 0: (9 occurrences) [help]
<description><div class="separator" style="clear: both; t ...
line 1316, column 0: (2 occurrences) [help]
<description><div dir="ltr" style="text-align: left;" ...
<?xml version="1.0"?><rss version="2.0"> <channel> <title>Planet Ubuntu</title> <link>http://planet.ubuntu.com/</link> <language>en</language> <description>Planet Ubuntu - http://planet.ubuntu.com/</description> <item> <title>Stuart Langridge: Happy 45th Anniversary, mum and dad</title> <guid isPermaLink="false">tag://www.kryogenix.org/days,2018-01-06:2018/01/06/happy-anniversary/</guid> <link>http://www.kryogenix.org/days/2018/01/06/happy-anniversary/</link> <description><p>You’re supposed to send cards to wish someone a happy anniversary. Well, today, my mum and dad have been married for 45 years (!), so I sent them some cards. Specifically, five playing cards, with weird symbols on them.</p><p><img alt="Joker, J♠, A♥, A♠, 5♠" src="https://kryogenix.org/images/ha45-1.png" /></p><p>So, the first question is: what order should they be in? You might need to be Irish to get this next bit.</p><p>There is a card game in Ireland called Forty-Five. It’s basically Whist, or Trumps; you each play a card, and highest card wins, except that a trump card beats a non-trump. My grandad, my mum’s dad, was an absolute demon at it. You’d sit and play a few hands and then he’d say: you reneged! And you’d say, I did what? And he’d say: you should have played your Jack of Spades there. And you’d say: how the bloody hell do you know I have the Jack of Spades? And then he’d beat you nine hundred games to nil.</p><p>Anyway, what makes Forty-Five not be Whist is that the trumps are in a weird order. Imagine that, in this hand, trump suit has been chosen as Spades. The highest trump, the best card in the pack, is the Five of Spades. Then the Jack of Spades, then the Joker, then the Ace of Hearts (<em>regardless</em> of which suit is trump; always the A♥ as fourth trump), then the Ace of Spades and down the other trump suit cards in sequence (K♠, Q♠, etc).</p><p>And it is their forty-fifth wedding anniversary. (See what I did there?) 
So if we put the cards in order:</p><p><img alt="5♠, J♠, Joker, A♥, A♠" src="https://kryogenix.org/images/ha45-2.png" /></p><p>then that’s correct. But what about the weird symbols? Well, once you’ve got the cards laid out in order as above, you can look at them from the right-hand side and the symbols spell a vertical message:</p><p><img alt="Weird symbols spell out 'HAPPY ANNIVERSARY'" src="https://kryogenix.org/images/ha45-3.png" /></p><p><span class="caps">HAPPY</span> <span class="caps">ANNIVERSARY</span>.</p><p>Also, I’m forty-one, so all you people who have suggested that my parents were unmarried (although by using a shorter word for it) are wrong.</p><p>Happy anniversary, mum and dad.</p></description> <pubDate>Sat, 06 Jan 2018 12:42:00 +0000</pubDate></item><item> <title>Raphaël Hertzog: My Free Software Activities in December 2017</title> <guid isPermaLink="false">https://raphaelhertzog.com/?p=3662</guid> <link>https://raphaelhertzog.com/2018/01/06/my-free-software-activities-in-december-2017/</link> <description><p><img alt="" class="alignleft size-medium wp-image-2728" height="300" src="https://raphaelhertzog.com/files/2012/07/activity-report-300x300.jpg" title="Activity report" width="300" />My monthly report covers a large part of what I have been doing in the free software world. I write it for <a href="https://raphaelhertzog.com/go/donate/">my donors</a> (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.</p><h3>Debian LTS</h3><p>This month I was allocated 12h and had two hours left over from last month, but I only spent 13h. 
During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (18 commits to the security tracker).</p><p>I also released <a href="https://lists.debian.org/debian-lts-announce/2017/12/msg00007.html">DLA-1205-1</a> on simplesamlphp fixing 6 CVE. I prepared and released <a href="https://lists.debian.org/debian-lts-announce/2017/12/msg00010.html">DLA-1207-1</a> on erlang with the help of the maintainer who tested the patch that I backported. I handled tkabber but it turned out that the CVE report was wrong, I reported this to MITRE who marked the CVE as DISPUTED (see <a href="https://security-tracker.debian.org/tracker/CVE-2017-17533">CVE-2017-17533</a>).</p><p>During my CVE triaging work, I decided to mark mp3gain and libnet-ping-external-perl as unsupported (the latter has been removed everywhere already). I re-classified the suricata CVE as not worth an update (following the decision of the security team). I also dropped global from dla-needed as the issue was marked unimportant but I still filed #884912 about it so that it gets tracked in the BTS.</p><p>I filed <a href="https://bugs.debian.org/884911">#884911</a> on ohcount requesting new upstream (fixing CVE) and update of homepage field (that is misleading in current package). I dropped jasperreports from dla-needed.txt as issues are undetermined and upstream is uncooperative, instead I suggested to mark the package as unsupported (see <a href="https://bugs.debian.org/884907">#884907</a>).</p><h3>Misc Debian Work</h3><p><strong>Debian Installer</strong>. I <a href="https://lists.debian.org/debian-boot/2017/12/msg00034.html">suggested to switch to isenkram</a> instead of discover for automatic package installation based on recognized hardware. 
I also filed a bug on isenkram (<a href="https://bugs.debian.org/883470">#883470</a>) and <a href="https://lists.debian.org/debian-cloud/2017/12/msg00018.html">asked debian-cloud for help</a> to complete the missing mappings.</p><p><strong>Packaging</strong>. I sponsored asciidoc 8.6.10-2 for Joseph Herlant. I uploaded new versions of live-tools and live-build fixing multiple bugs that had been reported (many with patches ready to merge). Only #882769 required a bit more work to track down and fix. I also uploaded dh-linktree 0.5 with a new feature contributed by Paul Gevers. By the way, I no longer use this package so I will happily give it over to anyone who needs it.</p><p><strong>QA team</strong>. When I got my account on salsa.debian.org (a bit before the announcement of the beta phase), I created the group for the <a href="https://salsa.debian.org/qa">QA team</a> and set up a project for <a href="https://salsa.debian.org/qa/distro-tracker">distro-tracker</a>.</p><p><strong>Bug reports</strong>. I filed <a href="https://bugs.debian.org/884713">#884713</a> on approx, requesting that systemd’s approx.socket be configured to not have any trigger limit.</p><h3>Package Tracker</h3><p>Following the switch to Python 3 by default, I updated the packaging provided in the git repository. I’m now also providing a systemd unit to run gunicorn3 for the website.</p><p>I merged multiple patches of Pierre-Elliott Bécue fixing bugs and adding a new feature (vcswatch support!). I fixed a bug related to the lack of a link to the experimental build logs and did a bit of bug triaging.</p><p>I also filed two bugs against DAK related to bad interactions with the package tracker: <a href="https://bugs.debian.org/884930">#884930</a> because it still uses packages.qa.debian.org to send emails instead of tracker.debian.org. And <a href="https://bugs.debian.org/884931">#884931</a> because it sends removal mails to too many email addresses. 
And I filed a bug against the tracker (<a href="https://bugs.debian.org/884933">#884933</a>) because the last issue also revealed a problem in the way the tracker handles removal mails.</p><h3>Thanks</h3><p>See you next month for a new summary of my activities.</p><p style="font-size: smaller;"><a href="https://raphaelhertzog.com/2018/01/06/my-free-software-activities-in-december-2017/#comments">No comment</a> | Liked this article? <a href="http://raphaelhertzog.com/support-my-work/">Click here</a>. | My blog is <a href="http://flattr.com/thing/26545/apt-get-install-debian-wizard">Flattr-enabled</a>.</p></description> <pubDate>Sat, 06 Jan 2018 10:50:13 +0000</pubDate></item><item> <title>Lubuntu Blog: Lubuntu 17.04 End Of Life and Lubuntu 17.10 Respins</title> <guid isPermaLink="false">http://lubuntu.me/?p=2721</guid> <link>http://lubuntu.me/lubuntu-17-04-eol-and-lubuntu-17-10-respins/</link> <description>Lubuntu 17.04 Reaches End of Life on Saturday, January 13, 2018 Following the End of Life notice for Ubuntu, the Lubuntu Team would like to announce that as a non-LTS release, 17.04 has a 9-month support cycle and, as such, will reach end of life on Saturday, January 13, 2018. Lubuntu will no longer provide […]</description> <pubDate>Sat, 06 Jan 2018 00:06:46 +0000</pubDate></item><item> <title>Ubuntu Insights: Announcing the Dell XPS 13 Developer Edition 9370 with Ubuntu</title> <guid isPermaLink="false">https://insights.ubuntu.com/?p=82521</guid> <link>https://insights.ubuntu.com/2018/01/05/announcing-the-dell-xps-13-developer-edition-9370-with-ubuntu/</link> <description><p>We’re excited to see Dell announce the availability of the 7th gen XPS 13 Developer Edition (9370), which comes preloaded with Ubuntu. Canonical have been part of Dell’s Project Sputnik since Day 1, and five years later we are delighted to see it continue. 
In fact, our VP of Product Dustin Kirkland was one of the three original developers (or cosmonauts) who provided input into this project and has left some thoughts five years later <a href="http://blog.dustinkirkland.com/2018/01/dell-xps-13-with-ubuntu-ultimate.html">in his blog</a>.</p><p>This model joins the family of <a href="http://www.dell.com/learn/us/en/555/campaigns/xps-linux-laptop_us">Dell systems that come preinstalled with Ubuntu</a> including the XPS 13 Developer Edition 9360 and the Dell Precision line. This broad line provides a wide breadth of configurations, putting Ubuntu in the hands of as many developers and enthusiasts as possible. Canonical have worked extensively with the team at Dell to ensure that users and developers get a first class Ubuntu experience out of the box. Our engineers and developers have been using the XPS laptops extensively, and we can’t wait for the latest generation! Here is a runthrough of some of the best features.</p><p><img alt="" class="alignleft size-full wp-image-82522" height="1084" src="https://insights.ubuntu.com/wp-content/uploads/7a88/XPS-Transparent-Version-Cropped.png" width="2000" /></p><h2>A brand new design</h2><p>The Dell XPS 13 uses an all new chassis and continues to be the smallest 13 inch laptop in the world. It’s 30% thinner at 3.4mm as well as lighter at 2.7 pounds! You won’t believe how much power is packed into an Ubuntu laptop this small!</p><p><img alt="" class="alignleft size-full wp-image-82523" height="795" src="https://insights.ubuntu.com/wp-content/uploads/23ef/XPS13-V2.jpg" width="1280" /></p><h2>A new viewing experience</h2><p>With a smaller design comes even smaller bezels! The XPS 13 has reduced the frame around the InfinityEdge display, yet packs in even more pixels than ever. 
This machine is now also available with an incredible 4K display.</p><h2>Uncompromised Power</h2><p>Even with all the major cosmetic changes, Dell have still ensured that this is the most powerful 13 inch laptop in its class. With an 8th generation Intel Quad Core Processor, it’s now twice as fast as the XPS 13 launched in 2015! For developers, four cores in a 13 inch laptop make it easier to deploy DevOps solutions such as <a href="https://www.ubuntu.com/kubernetes">Canonical’s Distribution of Kubernetes</a> and OpenStack on Ubuntu with <a href="https://conjure-up.io/">Conjure-up</a>. Between even faster SSDs, Thunderbolt 3, better processors, and up to 16GB of RAM, the XPS 13 Developer Edition will be a pint-sized powerhouse!</p><h2>Further Information</h2><p>The XPS 13 is now available in the <a href="http://www.dell.com/en-us/work/shop/laptops-notebooks/xps-13-9370-laptop/spd/xps-13-9370-laptop?~ck=bt">US &amp; Canada</a>, <a href="http://www.dell.com/en-uk/work/shop/laptops/xps-13-laptop/spd/xps-13-9370-laptop">UK</a>, Ireland, Germany, Austria, France, Italy, Spain, Switzerland (French and German), Belgium, Netherlands, Sweden, Norway, &amp; Denmark. 
For more information on Project Sputnik and upcoming offline availability,<a href="https://bartongeorge.io/2018/01/04/xps-13-developer-edition-the-7th-gen-is-here/"> check out this blog from the senior architect behind the XPS 13</a>.</p></description> <pubDate>Fri, 05 Jan 2018 22:06:35 +0000</pubDate></item><item> <title>Dustin Kirkland: Ubuntu Updates for the Meltdown / Spectre Vulnerabilities</title> <guid isPermaLink="false">tag:blogger.com,1999:blog-3822757291061444396.post-5947286724666298793</guid> <link>http://blog.dustinkirkland.com/2018/01/ubuntu-updates-for-meltdown-spectre.html</link> <description><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-F0GULSAo_SQ/Wk50-UrLDPI/AAAAAAAITik/EzbN2jBpCfcs4Y1IF_cqXzec241MAzkcgCLcBGAs/s1600/Screenshot%2Bfrom%2B2018-01-04%2B12-39-25.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="386" data-original-width="521" height="237" src="https://3.bp.blogspot.com/-F0GULSAo_SQ/Wk50-UrLDPI/AAAAAAAITik/EzbN2jBpCfcs4Y1IF_cqXzec241MAzkcgCLcBGAs/s320/Screenshot%2Bfrom%2B2018-01-04%2B12-39-25.png" width="320" /></a></div><i></i><br /><div style="text-align: center;"><i><i>For up-to-date patch, package, and USN links, please refer to: <a href="https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown">https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown</a><br /><br />This is cross-posted on Canonical's official Ubuntu Insights blog:<br /><a href="https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/">https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/</a></i></i></div><i></i><br /><div style="text-align: center;"><br /></div><div>Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history -- colloquially known as “<a 
href="https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)">Meltdown</a>” (<a href="https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5754.html">CVE-2017-5754</a>) and “<a href="https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)">Spectre</a>” (<a href="https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5753.html">CVE-2017-5753</a> and <a href="https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5715.html">CVE-2017-5715</a>) -- affecting practically every computer built in the last 10 years, running any operating system. That includes <a href="https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown">Ubuntu</a>.<br /><br />I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry standard best practice, which has broken down in this case.<br /><br />At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels -- Windows, MacOS, Linux, and many others -- are being patched to mitigate the critical security vulnerability.<br /><br />Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Year's holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures.<br /><br />Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. 
Updates will be available for:<br /><br /><ul><li>Ubuntu 17.10 (Artful) -- Linux 4.13 HWE</li><li>Ubuntu 16.04 LTS (Xenial) -- Linux 4.4 (and 4.4 HWE)</li><li>Ubuntu 14.04 LTS (Trusty) -- Linux 3.13</li><li>Ubuntu 12.04 ESM** (Precise) -- Linux 3.2<ul><li>Note that an <a href="https://www.ubuntu.com/support/esm">Ubuntu Advantage license</a> is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life</li></ul></li></ul><div>Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the <a href="https://lwn.net/Articles/742404/">KPTI</a> patchset as integrated upstream.<br /><br />Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical's <a href="https://partners.ubuntu.com/programmes/public-cloud">Certified Public Clouds</a> including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data.<br /><br />These kernel fixes will not be <a href="https://www.ubuntu.com/server/livepatch">Livepatch-able</a>. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. 
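Because these fixes cannot be live-patched, the only way to know you are protected is to confirm you have rebooted into an updated kernel. A quick version check might look like the following sketch (the threshold version string here is a placeholder for illustration, not an official USN kernel version):

```shell
#!/bin/sh
# Sketch: compare the running kernel version against a minimum "patched"
# version using sort -V. The threshold below is a placeholder, NOT the
# official patched version from an Ubuntu Security Notice.
kernel_at_least() {
  # Succeeds if version $1 >= version $2 in version-sort order.
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

current="$(uname -r)"
if kernel_at_least "$current" "4.4.0-108"; then
  echo "kernel $current: at or above the threshold"
else
  echo "kernel $current: update and reboot still needed"
fi
```

Comparing with sort -V avoids mis-ordering strings like 4.4.0-21 and 4.4.0-108, which a plain lexical comparison would get wrong.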
An update and a reboot will be required to activate this update.<br /><br />Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days.<br /><br />We don't have a performance analysis to share at this time, but please do stay tuned here as we'll follow up with that as soon as possible.<br /><br />Thanks,<br /><a href="https://twitter.com/dustinkirkland">@DustinKirkland</a><br />VP of Product<br />Canonical / Ubuntu</div></div></description> <pubDate>Fri, 05 Jan 2018 15:20:30 +0000</pubDate> <author>[email protected] (Dustin Kirkland)</author></item><item> <title>Dustin Kirkland: Dell XPS 13 with Ubuntu -- The Ultimate Developer Laptop of 2018!</title> <guid isPermaLink="false">tag:blogger.com,1999:blog-3822757291061444396.post-2302561658805753883</guid> <link>http://blog.dustinkirkland.com/2018/01/dell-xps-13-with-ubuntu-ultimate.html</link> <description><div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-JzvsRT30NyE/Wk6qAqNrrjI/AAAAAAAITjk/J-xhP4bzF9sJb4fE1Dv-HImE04sDmHz1gCLcBGAs/s1600/xps13.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="626" data-original-width="1155" height="216" src="https://1.bp.blogspot.com/-JzvsRT30NyE/Wk6qAqNrrjI/AAAAAAAITjk/J-xhP4bzF9sJb4fE1Dv-HImE04sDmHz1gCLcBGAs/s400/xps13.png" width="400" /></a></div><br />I'm the proud owner of a new Dell XPS 13 Developer Edition (<a href="http://www.dell.com/en-us/shop/dell-laptops/xps-13-laptop/spd/xps-13-9360-laptop">9360</a>) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.<br /><br />Kudos to the <a href="http://www.dell.com/learn/us/en/555/campaigns/xps-linux-laptop_us">Dell</a> and the <a href="https://certification.ubuntu.com/certification/make/Dell/">Canonical</a> teams that have engineered a truly remarkable developer desktop experience. 
You should also <a href="https://bartongeorge.io/2018/01/04/xps-13-developer-edition-the-7th-gen-is-here/">check out the post from Dell's senior architect behind the XPS 13</a>, Barton George.</div><div><br />As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-) See <a href="http://blog.dustinkirkland.com/2012/05/project-sputnik-developer-focused-dell.html">this post from May 7, 2012</a>. You'll be happy to know that machine is still going strong. It's now my wife's daily driver. And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)<br /><br />Now, this latest XPS edition is a real dream of a machine!<br /><br />From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory. While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...<br /><br />The 500GB NVME storage and a whopping 1239 MB/sec I/O throughput!<br /><br /><pre>[email protected]:~$ sudo hdparm -tT /dev/nvme0n1<br />/dev/nvme0n1:<br /> Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec<br /> Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec<br /></pre><br />And on top of that, this is my first <a href="https://en.wikipedia.org/wiki/Graphics_display_resolution#QHD+_(3200%C3%971800)">QHD+</a> touch screen laptop display, sporting a magnificent 3200x1800 resolution. The graphics are nothing short of spectacular. 
Here's nearly 4K of <i><a href="http://blog.dustinkirkland.com/2014/12/hollywood-technodrama.html">Hollywood</a></i> hard "at work" :-)<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-SdcWEGbWwX0/Wk6qesvf1dI/AAAAAAAITjw/ucOzdSodswgH4K3TSLes6AmZAsEOpsTMACLcBGAs/s1600/Screenshot%2Bfrom%2B2018-01-04%2B16-17-59.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://3.bp.blogspot.com/-SdcWEGbWwX0/Wk6qesvf1dI/AAAAAAAITjw/ucOzdSodswgH4K3TSLes6AmZAsEOpsTMACLcBGAs/s400/Screenshot%2Bfrom%2B2018-01-04%2B16-17-59.png" width="400" /></a></div><br />The keyboard is super comfortable. I like it a bit better than the 1st generation. Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-) The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-fcdUrDTxMq4/Wk6yUmhXhVI/AAAAAAAITmg/hC01mi_duPMVaMbHyZKCydzXrUXu4WKUwCLcBGAs/s1600/20180104_165838.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="688" data-original-width="1600" height="171" src="https://3.bp.blogspot.com/-fcdUrDTxMq4/Wk6yUmhXhVI/AAAAAAAITmg/hC01mi_duPMVaMbHyZKCydzXrUXu4WKUwCLcBGAs/s400/20180104_165838.jpg" width="400" /></a></div><br />The speakers are decent for a laptop, and the microphone is excellent. 
The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-lB0KecsOiCM/Wk6zQ9uoCQI/AAAAAAAITmw/H-DJBjg2fFQJqM92XW-4g3vQOBnwir6jQCLcBGAs/s1600/2018-01-04-170412.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="225" src="https://2.bp.blogspot.com/-lB0KecsOiCM/Wk6zQ9uoCQI/AAAAAAAITmw/H-DJBjg2fFQJqM92XW-4g3vQOBnwir6jQCLcBGAs/s400/2018-01-04-170412.jpg" width="400" /></a></div><br />And Bluetooth and WiFi, well, they "just work". I got 98.2 Mbits/sec of throughput over WiFi.<br /><br /><pre>[email protected]:~$ iperf -c 10.0.0.45<br />------------------------------------------------------------<br />Client connecting to 10.0.0.45, TCP port 5001<br />TCP window size: 85.0 KByte (default)<br />------------------------------------------------------------<br />[ 3] local 10.0.0.149 port 40568 connected with 10.0.0.45 port 5001<br />[ ID] Interval Transfer Bandwidth<br />[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec<br /></pre><br />There's no external display port, so you'll need <a href="https://www.amazon.com/dp/B075FKL7MC/_encoding=UTF8?coliid=I35967QHHGK3AN&amp;colid=2178RHU6O6G82&amp;psc=1">something like this USB-C-to-HDMI adapter</a> to project to a TV or monitor.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://www.amazon.com/dp/B075FKL7MC/_encoding=UTF8?coliid=I35967QHHGK3AN&amp;colid=2178RHU6O6G82&amp;psc=1"><img border="0" data-original-height="470" data-original-width="565" height="266" src="https://4.bp.blogspot.com/-O3PAJyI-BAk/Wk6xJhIQ_WI/AAAAAAAITls/XHt5Xe5MjUM9791V1-tay-_EjkV-IIpHwCLcBGAs/s320/Screenshot%2Bfrom%2B2018-01-04%2B16-56-21.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>There's 1x USB-C port, 2x USB-3 
ports, and an SD-Card reader.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-ok1Ee32sgJs/Wk6wQTuQR7I/AAAAAAAITlQ/ck1IP4cyx2o1LyfkXVMFwRH8g5CRrhuAQCLcBGAs/s1600/20180104_164944.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="387" data-original-width="1600" height="96" src="https://1.bp.blogspot.com/-ok1Ee32sgJs/Wk6wQTuQR7I/AAAAAAAITlQ/ck1IP4cyx2o1LyfkXVMFwRH8g5CRrhuAQCLcBGAs/s400/20180104_164944.jpg" width="400" /></a></div><br />One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended. I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars. To do so, you'll need to enable "USB PowerShare" in the BIOS. Here's <a href="https://www.dell.com/support/article/us/en/04/sln155147/usb-powershare-feature-on-dell-laptops?lang=en">an article from Dell's KnowledgeBase</a> explaining how.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-UamdeNkPcK4/Wk6wDy01iEI/AAAAAAAITlI/E9K5FycJx4kwM4sp6ZPnCL4l2eC1Ccs3ACLcBGAs/s1600/20180104_164824.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="486" data-original-width="1600" height="121" src="https://1.bp.blogspot.com/-UamdeNkPcK4/Wk6wDy01iEI/AAAAAAAITlI/E9K5FycJx4kwM4sp6ZPnCL4l2eC1Ccs3ACLcBGAs/s400/20180104_164824.jpg" width="400" /></a></div><br />Honestly, I have only one complaint... And that's that there is no <a href="https://en.wikipedia.org/wiki/Pointing_stick">Trackstick</a> mouse (which is available on some Dell models). I'm not a huge fan of the Touchpad. It's too sensitive, and my palms are always touching it inadvertently. So I need to use an external mouse to be effective. I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop! Otherwise, this machine is a beauty. 
I'm sure you'll love it too.<br /><br />Cheers,<br />Dustin</div></description> <pubDate>Fri, 05 Jan 2018 15:12:46 +0000</pubDate> <author>[email protected] (Dustin Kirkland)</author></item><item> <title>Kees Cook: SMEP emulation in PTI</title> <guid isPermaLink="false">https://outflux.net/blog/?p=1093</guid> <link>https://outflux.net/blog/archives/2018/01/04/smep-emulation-in-pti/</link> <description><p>A nice additional benefit of the recent <a href="https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?h=x86/pti&amp;id=385ce0ea4c078517fa51c261882c4e72fba53005">Kernel Page Table Isolation</a> (<code>CONFIG_PAGE_TABLE_ISOLATION</code>) patches (to defend against <a href="https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html">CVE-2017-5754</a>, the speculative execution “rogue data cache load” or “Meltdown” flaw) is that the userspace page tables visible while running in kernel mode <a href="https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?h=x86/pti&amp;id=1c4de1ff4fe50453b968579ee86fac3da80dd783">lack the executable bit</a>. As a result, systems without the SMEP CPU feature (before Ivy-Bridge) get it emulated for “free”.</p><p>Here’s a non-SMEP system with PTI disabled (booted with “<code>pti=off</code>”), running the <code>EXEC_USERSPACE</code> LKDTM test:</p><blockquote><pre># grep smep /proc/cpuinfo<br /># dmesg -c | grep isolation<br />[ 0.000000] Kernel/User page tables isolation: disabled on command line.<br /># cat &lt;(echo EXEC_USERSPACE) &gt; /sys/kernel/debug/provoke-crash/DIRECT<br /># dmesg<br />[ 17.883754] lkdtm: Performing direct entry EXEC_USERSPACE<br />[ 17.885149] lkdtm: attempting ok execution at ffffffff9f6293a0<br />[ 17.886350] lkdtm: attempting bad execution at 00007f6a2f84d000</pre></blockquote><p>No crash! 
The kernel was happily executing userspace memory.</p><p>But with PTI enabled:</p><blockquote><pre># grep smep /proc/cpuinfo<br /># dmesg -c | grep isolation<br />[ 0.000000] Kernel/User page tables isolation: enabled<br /># cat &lt;(echo EXEC_USERSPACE) &gt; /sys/kernel/debug/provoke-crash/DIRECT<br />Killed<br /># dmesg<br />[ 33.657695] lkdtm: Performing direct entry EXEC_USERSPACE<br />[ 33.658800] lkdtm: attempting ok execution at ffffffff926293a0<br />[ 33.660110] lkdtm: attempting bad execution at 00007f7c64546000<br />[ 33.661301] BUG: unable to handle kernel paging request at 00007f7c64546000<br />[ 33.662554] IP: 0x7f7c64546000<br />...</pre></blockquote><p>It should only take a little more work to leave the userspace page tables entirely unmapped while in kernel mode, and only map them in during <code>copy_to_user()</code>/<code>copy_from_user()</code> as ARM already does with <code>ARM64_SW_TTBR0_PAN</code> (or <code>CONFIG_CPU_SW_DOMAIN_PAN</code> on arm32).</p><p style="clear: both; text-align: left;">© 2018, <a href="https://outflux.net/blog/">Kees Cook</a>. 
This work is licensed under a <a href="http://creativecommons.org/licenses/by-sa/3.0/us/" rel="license">Creative Commons Attribution-ShareAlike 3.0 License</a>.<br /><a href="http://creativecommons.org/licenses/by-sa/3.0/us/" rel="license"><img alt="Creative Commons License" src="http://outflux.net/illustrations/cc-88x31.png" style="border-width: 0;" /></a> </p></description> <pubDate>Thu, 04 Jan 2018 21:43:41 +0000</pubDate></item><item> <title>Ubuntu Insights: Dustin Kirkland: Ubuntu Updates for the Meltdown / Spectre Vulnerabilities</title> <guid isPermaLink="false">http://blog.dustinkirkland.com/2018/01/ubuntu-updates-for-meltdown-spectre.html</guid> <link>https://insights.ubuntu.com/2018/01/04/dustin-kirkland-ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/</link> <description><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-F0GULSAo_SQ/Wk50-UrLDPI/AAAAAAAITik/EzbN2jBpCfcs4Y1IF_cqXzec241MAzkcgCLcBGAs/s1600/Screenshot%2Bfrom%2B2018-01-04%2B12-39-25.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="237" src="https://3.bp.blogspot.com/-F0GULSAo_SQ/Wk50-UrLDPI/AAAAAAAITik/EzbN2jBpCfcs4Y1IF_cqXzec241MAzkcgCLcBGAs/s320/Screenshot%2Bfrom%2B2018-01-04%2B12-39-25.png" width="320" /></a></div> <div style="text-align: center;"><i><i>For up-to-date patch, package, and USN links, please refer to: <a href="https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown">https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown</a></i></i></div> <div style="text-align: center;"></div><div> Unfortunately, you’ve probably already read about one of the most widespread security issues in modern computing history -- colloquially known as “<a href="https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)">Meltdown</a>” (<a href="https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5754.html">CVE-2017-5754</a>) and “<a 
href="https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)">Spectre</a>” (<a href="https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5753.html">CVE-2017-5753</a> and <a href="https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5715.html">CVE-2017-5715</a>) -- affecting practically every computer built in the last 10 years, running any operating system. That includes <a href="https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown">Ubuntu</a>. I say “unfortunately”, in part because there was a coordinated release date of January 9, 2018, agreed upon by essentially every operating system, hardware, and cloud vendor in the world. By design, operating system updates would be available at the same time as the public disclosure of the security vulnerability. While it happens rarely, this is an industry standard best practice, which has broken down in this case. At its heart, this vulnerability is a CPU hardware architecture design issue. But there are billions of affected hardware devices, and replacing CPUs is simply unreasonable. As a result, operating system kernels -- Windows, MacOS, Linux, and many others -- are being patched to mitigate the critical security vulnerability. Canonical engineers have been working on this since we were made aware under the embargoed disclosure (November 2017) and have worked through the Christmas and New Year's holidays, testing and integrating an incredibly complex patch set into a broad set of Ubuntu kernels and CPU architectures. Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible. 
Updates will be available for:<ul> <li>Ubuntu 17.10 (Artful) -- Linux 4.13 HWE</li> <li>Ubuntu 16.04 LTS (Xenial) -- Linux 4.4 (and 4.4 HWE)</li> <li>Ubuntu 14.04 LTS (Trusty) -- Linux 3.13</li> <li>Ubuntu 12.04 ESM** (Precise) -- Linux 3.2<ul> <li>Note that an <a href="https://www.ubuntu.com/support/esm">Ubuntu Advantage license</a> is required for the 12.04 ESM kernel update, as Ubuntu 12.04 LTS is past its end-of-life</li></ul></li></ul><div> Ubuntu 18.04 LTS (Bionic) will release in April of 2018, and will ship a 4.15 kernel, which includes the <a href="https://lwn.net/Articles/742404/">KPTI</a> patchset as integrated upstream. Ubuntu optimized kernels for the Amazon, Google, and Microsoft public clouds are also covered by these updates, as well as the rest of Canonical's <a href="https://partners.ubuntu.com/programmes/public-cloud">Certified Public Clouds</a> including Oracle, OVH, Rackspace, IBM Cloud, Joyent, and Dimension Data. <em>Important note: There are several other public clouds not listed here, which modify the Ubuntu image and/or Linux kernel, and your Ubuntu security experience there is compromised.</em> These kernel fixes will not be <a href="https://www.ubuntu.com/server/livepatch">Livepatch-able</a>. The source code changes required to address this problem comprise hundreds of independent patches, touching hundreds of files and thousands of lines of code. The sheer complexity of this patchset is not compatible with the Linux kernel Livepatch mechanism. An update and a reboot will be required to activate these fixes. Furthermore, you can expect Ubuntu security updates for a number of other related packages, including CPU microcode, GCC and QEMU in the coming days. 
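Once the updated kernel is installed and you have rebooted, there is a quick way to confirm what you are running. This is a minimal read-only sketch, assuming a patched kernel that exposes the sysfs mitigation interface (the path is simply absent on kernels that predate the fixes):

```shell
# Show the running kernel version (compare against the USN announcement).
uname -r
# Kernels carrying the mitigation patches report per-issue status in
# sysfs; if the directory is missing, the kernel predates the fixes.
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null || echo "no mitigation info exposed by this kernel"
```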
Thanks,<a href="https://twitter.com/dustinkirkland">@DustinKirkland</a>VP of ProductCanonical / Ubuntu </div></div></description> <pubDate>Thu, 04 Jan 2018 21:41:23 +0000</pubDate></item><item> <title>Kubuntu General News: Plasma 5.11.5 bugfix release available in backports PPA for Artful Aardvark 17.10</title> <guid isPermaLink="false">https://kubuntu.org/?p=3668</guid> <link>https://kubuntu.org/news/plasma-5-11-5-bugfix-release-available-in-backports-ppa-for-artful-aardvark-17-10/</link> <description><p>The <a href="https://www.kde.org/announcements/plasma-5.11.5.php">5th and final bugfix update (5.11.5)</a> of the <a href="https://www.kde.org/announcements/plasma-5.11.0.php">Plasma 5.11</a> series is now available for users of Kubuntu Artful Aardvark 17.10 to install via our Backports PPA.</p><p>This update also includes an upgrade of <a href="https://www.kde.org/announcements/kde-frameworks-5.41.0.php">KDE Frameworks to version 5.41</a>.</p><p>To update, add the following repository to your software sources list:</p><p><code>ppa:kubuntu-ppa/backports</code></p><p>or if it is already added, the updates should become available via your preferred update method.</p><p>The PPA can be added manually in the Konsole terminal with the command:</p><p><code>sudo add-apt-repository ppa:kubuntu-ppa/backports</code></p><p>and packages then updated with</p><p><code>sudo apt update</code><br /><code>sudo apt full-upgrade</code></p><p>Upgrade notes:</p><p>~ The Kubuntu backports PPA includes various other backported applications, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial amount of upgraded packages in addition to Plasma 5.11.5.</p><p>~ The PPA may also continue to receive updates to Plasma when they become available, and further updated applications where practical.</p><p>~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been 
tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].</p><p>1. Kubuntu-devel mailing list:<a href="https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel"> https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel</a><br />2. Kubuntu IRC channels: #kubuntu &amp; #kubuntu-devel on irc.freenode.net<br />3. Kubuntu PPA bugs: <a href="https://bugs.launchpad.net/kubuntu-ppa">https://bugs.launchpad.net/kubuntu-ppa</a></p></description> <pubDate>Thu, 04 Jan 2018 13:28:52 +0000</pubDate></item><item> <title>Simos Xenitellis: How to preconfigure LXD containers with cloud-init</title> <guid isPermaLink="false">https://blog.simos.info/?p=2075</guid> <link>https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/</link> <description>You are creating containers and you want them to be somewhat preconfigured. For example, you want them to run automatically apt update as soon as they are launched. Or, get some packages pre-installed, or run a few commands. Here is how to perform this early initialization with cloud-init through LXD to container images that support … <p></p><p><a class="more-link btn" href="https://blog.simos.info/how-to-preconfigure-lxd-containers-with-cloud-init/">Continue reading</a></p></description> <pubDate>Wed, 03 Jan 2018 15:39:26 +0000</pubDate></item><item> <title>Sean Davis: smdavis.us Is Now bluesabre.org!</title> <guid isPermaLink="false">https://bluesabre.org/?p=2439</guid> <link>https://bluesabre.org/2018/01/01/smdavis-us-is-now-bluesabre-org/</link> <description><p>So, you’ve clicked on a link or came to check for a new release at smdavis.us, and now you’re here at bluesabre.org. Fear not! Everything is working just as it should.</p><p>To kick off 2018, I’ve started tidying up my personal brand. 
Since my website has consistently been about FOSS updates, I’ve transitioned to a more fitting .org domain. The .org TLD is often associated with community and open source initiatives, and the content you’ll find here is always going to fit that bill. You can continue to expect a steady stream of Xfce and Xubuntu updates.</p><p>And that’s enough of that; let’s get started with the new year. 2018 is going to be one of the best yet!</p></description> <pubDate>Tue, 02 Jan 2018 03:16:22 +0000</pubDate></item><item> <title>Julian Andres Klode: A year ends, a new year begins</title> <guid isPermaLink="false">http://juliank.wordpress.com/?p=1522</guid> <link>https://juliank.wordpress.com/2018/01/01/a-year-ends-a-new-year-begins/</link> <description><p>2017 is ending. It’s been a rather uneventful year, I’d say. About 6 months ago I started working on my master’s thesis – it plays with adding linear types to Go – and I handed that in about 1.5 weeks ago. It’s not really complete, though – you cannot actually use it on a complete Go program. The source code is of course <a href="https://github.com/julian-klode/lingolang">available on GitHub</a>; it’s a bunch of Go code for the implementation and a bunch of Markdown and LaTeX for the document. I’m happy about the code coverage, though: As a properly developed software project, it achieves about 96% code coverage – the missing parts happening at the end, when time ran out <img alt="😉" class="wp-smiley" src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f609.png" style="height: 1em;" /></p><p>I released apt 1.5 this year, and started 1.6 with seccomp sandboxing for methods.</p><p>I went to DebConf17 in Montreal. I unfortunately did not make it to DebCamp, nor the first day, but I at least made the rest of the conference. There, I gave a talk about APT development in the past year, and had a few interesting discussions. 
One thing that directly resulted from such a discussion was a new proposal for delta upgrades, with a very simple delta format based on a variant of bsdiff (with external compression, streamable patches, and constant memory use rather than linear). I hope we can implement this – the savings are enormous with practically no slowdown (there is no reconstruction phase, upgrades are streamed directly to the file system), which is especially relevant for people with slow or data-capped connections.</p><p>This month, I’ve been buying a few “toys”: I got a pair of speakers (JBL LSR 305), and I got a noise-cancelling headphone (a Sony WH-1000XM2). Nice stuff. Been wearing the headphones most of today, and they’re quite comfortable and really make things quiet, except for their own noise <img alt="😉" class="wp-smiley" src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f609.png" style="height: 1em;" /> Well, both the headphone and the speakers have a white noise issue, but oh well, the prices were good.</p><p>This time of the year is not only a time to look back at the past year, but also to look forward to the year ahead. In one week, I’ll be joining Canonical to work on Ubuntu foundation stuff. It’s going to be interesting. I’ll also be moving places shortly; having partially lived in student housing for 6 years (one room, and a shared kitchen), I’ll be moving to a complete apartment.</p><p>On the APT front, I plan to introduce a few interesting changes. One of them involves automatic removal of unused packages: This should be happening automatically during install, upgrade, and whatever. Maybe not for all packages, though – we might have a list of “safe” autoremovals. I’d also be interested in adding metadata for transitions: Like if libfoo1 replaces libfoo0, we can safely remove libfoo0 if nothing depends on it anymore. Maybe not for all “garbage” either. 
It might make sense to restrict it to new garbage – that is, packages that become unused as part of the operation. This is important for safe handling of existing setups with automatically removable packages: We don’t suddenly want to remove them all when you run upgrade.</p><p>The other change is about sandboxing. You might have noticed that sometimes, sandboxing is disabled with a warning because the method would not be able to access the source or the target. The goal is to open these files in the main program and send file descriptors to the methods via a socket. This way, we can avoid permission problems, and we can also make the sandbox stronger – for example, by not giving it access to the partial/ directory anymore.</p><p>Another change we need to work on is standardising the “Important” field, which is sort of like essential – it marks an installed package as extra-hard to remove (but unlike Essential, does not cause apt to install it automatically). The latest draft calls it “Protected”, but I don’t think we have a consensus on that yet.</p><p>I also need to get happy eyeballs done – fast fallback from IPv6 to IPv4. I had a completely working solution some months ago, but it did not pass CI, so I decided to start from scratch with a cleaner design to figure out if I went wrong somewhere. Testing this is kind of hard, as it basically requires a broken IPv6 setup (well, unreachable IPv6 servers).</p><p>Oh well, 2018 has begun, so I’m going to stop now. 
Let’s all do our best to make it awesome!</p><br />Filed under: <a href="https://juliank.wordpress.com/category/debian/">Debian</a>, <a href="https://juliank.wordpress.com/category/general/">General</a>, <a href="https://juliank.wordpress.com/category/ubuntu/">Ubuntu</a> <a href="http://feeds.wordpress.com/1.0/gocomments/juliank.wordpress.com/1522/" rel="nofollow"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/juliank.wordpress.com/1522/" /></a> <img alt="" border="0" height="1" src="https://pixel.wp.com/b.gif?host=juliank.wordpress.com&amp;blog=2363947&amp;post=1522&amp;subd=juliank&amp;ref=&amp;feed=1" width="1" /></description> <pubDate>Sun, 31 Dec 2017 23:01:10 +0000</pubDate></item><item> <title>Eric Hammond: Streaming AWS DeepLens Video Over SSH</title> <guid isPermaLink="false">https://alestic.com/2017/12/aws-deeplens-video-stream-ssh/</guid> <link>http://feeds.alestic.com/~r/alestic-planetubuntu/~3/_9EyszVpF1M/</link> <description><p><em>instead of connecting to the DeepLens with HDMI micro cable, monitor, keyboard, mouse</em></p> <p>Credit for this excellent idea goes to <a href="https://forums.aws.amazon.com/thread.jspa?threadID=269057&amp;tstart=0">Ernie Kim</a>. Thank you!</p> <h2 id="instructions-without-ssh">Instructions without ssh</h2> <p>The standard AWS DeepLens <a href="https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-viewing-output.html">instructions</a> recommend connecting the device to a monitor, keyboard, and mouse. 
The instructions provide information on how to view the video streams in this mode:</p> <p>If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:</p> <pre><code>mplayer -demuxer /opt/awscam/out/ch1_out.h264</code></pre> <p>If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:</p> <pre><code>mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg</code></pre> <h2 id="instructions-with-ssh">Instructions with ssh</h2> <p>You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the <a href="https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-getting-started-set-up.html">initial setup</a> of the device. I’m <a href="https://forums.aws.amazon.com/thread.jspa?threadID=270388&amp;tstart=0">working to get instructions</a> on how to enable ssh access afterwards and will update once this is available.</p> <p>To view the video streams over ssh, we take the same <code>mplayer</code> command options above and the same source stream files, but send the stream over ssh, and feed the result to the stdin of an <code>mplayer</code> process running on the local system, presumably a laptop.</p> <p>All of the following commands are run on your local laptop (not on the DeepLens device).</p> <p>You need to know the IP address of your DeepLens device on your local network:</p> <pre><code>ip_address=[IP ADDRESS OF DeepLens]</code></pre> <p>You will need to install the <code>mplayer</code> software on your local laptop. 
This varies with your OS, but for Ubuntu:</p> <pre><code>sudo apt-get install mplayer</code></pre> <p>You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:</p> <pre><code>ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 | mplayer -demuxer -</code></pre> <p>You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:</p> <pre><code>ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg | mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -</code></pre> <p>Benefits of using ssh to view the video streams include:</p> <ul><li><p>You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.</p></li> <li><p>You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.</p></li> <li><p>You don’t need to be physically close to the DeepLens when you are viewing the video streams.</p></li></ul> <p>For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!</p> <h2 id="bonus">Bonus</h2> <p>To protect the security of your sensitive DeepLens video feeds:</p> <p></p> <ul><li><p>Use a long, randomly generated password for ssh on your DeepLens, even if you are only using it inside a private network.</p></li> <li><p>I would recommend <a href="https://help.ubuntu.com/community/SSH/OpenSSH/Keys#Transfer_Client_Key_to_Host">setting up .ssh/authorized_keys</a> on the DeepLens so you can ssh in with your personal ssh key, test it, then <a href="https://help.ubuntu.com/community/SSH/OpenSSH/Configuring#disable-password-authentication">disable password access</a> for ssh on the DeepLens device. Don’t forget the password, because it is still needed for sudo.</p></li> <li><p>Enable automatic updates on your DeepLens so that Ubuntu security patches are applied quickly. 
This is available as an option in the <a href="https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-getting-started-set-up.html">initial setup</a>, and should be possible to do afterwards using the standard Ubuntu <a href="https://help.ubuntu.com/lts/serverguide/automatic-updates.html">unattended-upgrades package</a>.</p></li></ul> <p>Unrelated side note: It’s kind of nice having the DeepLens run a standard Ubuntu LTS release. Excellent choice!</p> <p>Original article and comments: <a href="https://alestic.com/2017/12/aws-deeplens-video-stream-ssh/">https://alestic.com/2017/12/aws-deeplens-video-stream-ssh/</a></p><img alt="" height="1" src="http://feeds.feedburner.com/~r/alestic-planetubuntu/~4/_9EyszVpF1M" width="1" /></description> <pubDate>Sat, 30 Dec 2017 05:00:00 +0000</pubDate></item><item> <title>Serge Hallyn: GTD tools</title> <guid isPermaLink="false">http://s3hh.wordpress.com/?p=583</guid> <link>https://s3hh.wordpress.com/2017/12/29/gtd-tools/</link> <description><p>I’ve been using GTD to organize projects for a long time. The “tickler file” in particular is a crucial part of how I handle scheduling of upcoming and recurring tasks. I’ve blogged about some of the scripts I’ve written to help me do so in the past at <a href="https://s3hh.wordpress.com/2013/04/19/gtd-managing-projects/" rel="nofollow">https://s3hh.wordpress.com/2013/04/19/gtd-managing-projects/</a> and <a href="https://s3hh.wordpress.com/2011/12/10/tickler/" rel="nofollow">https://s3hh.wordpress.com/2011/12/10/tickler/</a>. 
This week I’ve combined these tools, slightly updated them, added an install script, and put them on GitHub at <a href="http://github.com/hallyn/gtdtools" rel="nofollow">http://github.com/hallyn/gtdtools</a>.</p><h2>Disclaimer</h2><p>The opinions expressed in this blog are my own views and not those of Cisco.</p><br /> <a href="http://feeds.wordpress.com/1.0/gocomments/s3hh.wordpress.com/583/" rel="nofollow"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/s3hh.wordpress.com/583/" /></a> <img alt="" border="0" height="1" src="https://pixel.wp.com/b.gif?host=s3hh.wordpress.com&amp;blog=14017495&amp;post=583&amp;subd=s3hh&amp;ref=&amp;feed=1" width="1" /></description> <pubDate>Fri, 29 Dec 2017 21:23:25 +0000</pubDate></item><item> <title>Simos Xenitellis: Installing retdec on Ubuntu</title> <guid isPermaLink="false">https://blog.simos.info/?p=2063</guid> <link>https://blog.simos.info/installing-retdec-on-ubuntu/</link> <description>retdec (RETargetable DECompiler) is a decompiler, and it is the one that was released recently as open-source software by Avast Software. retdec can take an executable and work backwards to recreate the initial source code (with limitations). An example with retdec Let’s see first an example. Here is the initial source code, that was compiled … <p></p><p><a class="more-link btn" href="https://blog.simos.info/installing-retdec-on-ubuntu/">Continue reading</a></p></description> <pubDate>Fri, 29 Dec 2017 20:45:27 +0000</pubDate></item><item> <title>David Tomaschik: A Cheap and Compact Bench Power Supply</title> <guid isPermaLink="false">https://systemoverlord.com/2017/12/29/a-cheap-and-compact-bench-power-supply</guid> <link>https://systemoverlord.com/2017/12/29/a-cheap-and-compact-bench-power-supply.html</link> <description><p>I wanted a bench power supply for powering small projects and devices I’m testing. I ended up with a DIY approach for around $30 and am very happy with the outcome. 
It’s a simple project that almost anyone can do and is a great introductory power supply for any home lab.</p> <p>I had a few requirements when I set out:</p> <ul> <li>Variable voltage (up to ~12V)</li> <li>Current limiting (to protect against stupid mistakes)</li> <li>Small footprint (my electronics work area is only about 8 square feet)</li> <li>Relatively cheap</li></ul> <p>Initially, I considered buying an off-the-shelf bench power supply, but most of those are either very expensive, very large, or both. I also toyed with the idea of an ATX power supply as a bench power supply, but those don’t offer current limiting (and are capable of delivering enough current to destroy any project I’m careless with).</p> <p>I had seen a few DC-DC buck converter modules floating around, but most had pretty bad reviews, until the Ruidong DPS series came out. These have quickly become quite popular modules, with support for up to 50V at 5A – a 250W power supply! Because of the buck topology, they require a DC input at a higher voltage than the output, but that’s easily provided with another power supply. In my case, I decided to use cheap power supplies from electronic devices (commonly called “wall warts”). 
(I’m actually reusing one from an old router.)</p> <p>I’m far from the first to do such a project, but I still wanted to share as well as describe what I’d like to do in the future.</p> <p><img alt="power supply" src="https://systemoverlord.com/img/blog/powersupply/outside.jpg" /></p> <p>This particular unit consists of a <a href="http://amzn.to/2lZ07wQ">DPS3005</a> that I got for about $25 from <a href="https://www.aliexpress.com/item/RD-DPS3005-Constant-Voltage-current-Step-down-Programmable-Power-Supply-module-buck-Voltage-converter-color-LCD/32684316119.html">AliExpress</a>. (The DPS5005 is now available on <a href="http://amzn.to/2lZ9ah7">Amazon with Prime</a>. Had that been the case at the time I built this, I likely would have gone with that option.)</p> <p>I placed the power supply in <a href="http://amzn.to/2CLi3W2">a plastic enclosure</a> and added <a href="http://amzn.to/2Ed0Xxl">a barrel jack</a> for input power, and added <a href="http://amzn.to/2CJ4WVh">5-way binding posts</a> for the output. This allows me to connect banana plugs, breadboard leads, or spade lugs to the power supply.</p> <p><img alt="power supply inside" src="https://systemoverlord.com/img/blog/powersupply/inside.jpg" /></p> <p>Internally, I connected the parts with some 18 AWG red/black zip cord using crimped ring connectors on the binding posts, the screw terminals on the power supply, and solder on the barrel jack. Where possible, the connections were covered with heat shrink tubing.</p> <p>I used this power supply in developing my <a href="https://systemoverlord.com/2017/12/24/2017-christmas-ornament.html">Christmas Ornament</a>, and it worked a treat. 
It allowed me to simulate behavior at lower battery voltages (though note that it is not a battery replacement – it does not simulate the internal resistance of a run-down battery) and figure out how long my ornament was likely to run, and how bright it would be as the battery ran down.</p> <p>I’ve also used it to power a few embedded devices that I’ve been using for security research, and I think it would make a great tool for voltage glitching in the future. (In fact, I saw Dr. Dmitry Nedospasov demonstrate a voltage glitching attack using a similar module at <a href="https://hardwaresecurity.training">hardwaresecurity.training</a>.)</p> <p>In the future, I’d like to build a larger version with an internal AC to DC power supply (maybe a repurposed ATX supply) and either two or three of the DPS power modules to provide output. Note that, due to the single AC to DC supply, they would <em>not</em> be isolated channels – both would have the same ground reference, so it would not be possible to reference them to each other. For most use cases, this wouldn’t be a problem, and both channels <em>would</em> be isolated from mains earth if an isolated switching supply is used as the first-stage power supply.</p></description> <pubDate>Fri, 29 Dec 2017 08:00:00 +0000</pubDate></item><item> <title>Stuart Langridge: OwnTracks and a map</title> <guid isPermaLink="false">tag://www.kryogenix.org/days,2017-12-28:2017/12/28/owntracks-and-a-map/</guid> <link>http://www.kryogenix.org/days/2017/12/28/owntracks-and-a-map/</link> <description><p>Every year we do a bit of a pub crawl in Birmingham between Christmas and New Year; a chance to get away from the turkey risotto, and hang out with people and talk about techie things after a few days away with family and so on. It’s all rather loosely organised — I tried putting exact times on every pub once and it didn’t work out very well. 
So this year, 2017, I wanted a map which showed where we were so people can come and find us — it’s a twelve-hour all-day-and-evening thing but nobody does the whole thing<sup id="sf-owntracks-and-a-map-1-back"><a class="simple-footnote" href="http://feeds.feedburner.com/kryogenix#sf-owntracks-and-a-map-1" title="well, except me. And hero of the revolution Andy Yates.">1</a></sup> so the idea is that you can drop in at some point, have a couple of drinks, and then head off again. For that, you need to know where we all are.</p><p>Clearly, the solution here is technology; I carry a device in my pocket<sup id="sf-owntracks-and-a-map-2-back"><a class="simple-footnote" href="http://feeds.feedburner.com/kryogenix#sf-owntracks-and-a-map-2" title="and you do too">2</a></sup> which knows where I am and can display that on a map. There are a few services that do this, or used to — Google Latitude, <span class="caps">FB</span> messenger does it, Apple find-my-friends — but they’re all “only people with the Magic Software can see this”, and “you have to use our servers”, and that’s not very web-ish, is it? What I wanted was a thing which sat there in the background on my phone and reported my location to <em>my</em> server when I moved around, and didn’t eat battery. That wouldn’t be tricky to write but I bet there’s a load of annoying corner cases, which is why I was very glad to discover that <a href="http://owntracks.org/">OwnTracks</a> have done it for me.</p><p>You install their mobile app (for Android or iOS) and then configure it with the <span class="caps">URL</span> of your server and every now and again it reports your location by posting <span class="caps">JSON</span> to that <span class="caps">URL</span> saying what your location is. Only one word for that: magic darts. Exactly what I wanted.</p><p>It’s a little tricky because of that “don’t use lots of battery” requirement. Apple heavily restrict background location sniffing, for lots of good reasons. 
If your app is the active app and the screen’s unlocked, it can read your location as often as it wants, but that’s impractical. If you want to get notified of location changes in the <em>background</em> on iOS then you only get told if you’ve moved more than 500 metres in less than five minutes<sup id="sf-owntracks-and-a-map-3-back"><a class="simple-footnote" href="http://feeds.feedburner.com/kryogenix#sf-owntracks-and-a-map-3" title="the OwnTracks docs explain this in more detail">3</a></sup> which is fine if you’re on the motorway but less fine if you’re walking around town and won’t move that far. However, you can nominate certain locations as “waypoints” and then the app gets notified whenever it enters or leaves a waypoint, even if it’s in the background and set to “manual mode”. So, I added all the pubs we’re planning on going to as waypoints, which is a bit annoying to do manually but works fine.</p><p>OwnTracks then posts my location to a tiny <span class="caps">PHP</span> file which just dumps it in a big <span class="caps">JSON</span> list. The <a href="https://kryogenix.org/brumtechxmas17/">#brumtechxmas 2017 map</a> then reads that <span class="caps">JSON</span> file and plots the walk on the map (or it will do once we’re doing it; as I write this, the event isn’t until tomorrow, Friday 29th December, but I have tested it out).</p><p>The map is an <span class="caps">SVG</span>, embedded in the page. This has the nice property that I can change it with <span class="caps">CSS</span>. In particular, the page looks at the list of locations we’ve been in and works out whether any of them were close enough to a pub on the map that we probably went in there… and then uses <span class="caps">CSS</span> to colour the pub we’re <em>in</em> green, and ones we’ve been in grey. So it’s dynamic! Nice and easy to find us wherever we are. If it works, which is a bit handwavy at this point.</p><p>If you’re coming, see you tomorrow. 
If you’re not coming: you should come. :-)</p><p><img alt="A static version of the map: you'll want the website for the real dynamic clever one" src="http://feeds.feedburner.com/images/brumtechxmas2017.png" title="A static version of the map: you'll want the website for the real dynamic clever one" /></p><ol class="simple-footnotes"><li id="sf-owntracks-and-a-map-1">well, except me. And hero of the revolution Andy Yates. <a class="simple-footnote-back" href="http://feeds.feedburner.com/kryogenix#sf-owntracks-and-a-map-1-back">↩</a></li><li id="sf-owntracks-and-a-map-2">and you do too <a class="simple-footnote-back" href="http://feeds.feedburner.com/kryogenix#sf-owntracks-and-a-map-2-back">↩</a></li><li id="sf-owntracks-and-a-map-3">the <a href="http://owntracks.org/booklet/features/location/#ios">OwnTracks docs</a> explain this in more detail <a class="simple-footnote-back" href="http://feeds.feedburner.com/kryogenix#sf-owntracks-and-a-map-3-back">↩</a></li></ol></description> <pubDate>Thu, 28 Dec 2017 11:33:00 +0000</pubDate></item><item> <title>Bryan Quigley: Working on a proposal</title> <guid isPermaLink="true">https://bryanquigley.com/posts/working-on-a-proposal.html</guid> <link>https://bryanquigley.com/posts/working-on-a-proposal.html</link> <description><p><a href="https://bryanquigley.com/pages/papers/ubuntu-monthly-update-cadence.html">Draft</a> of a proposal I'm working on. <a href="https://gitlab.com/BryanQuigley/bryanquigley.com/tree/master/pages/papers">Feedback/improvements welcome</a></p></description> <pubDate>Thu, 28 Dec 2017 04:56:08 +0000</pubDate></item><item> <title>Lubuntu Blog: Lubuntu Seeds are now in Git</title> <guid isPermaLink="false">http://lubuntu.me/?p=2701</guid> <link>http://lubuntu.me/lubuntu-seeds-are-now-in-git/</link> <description>Lubuntu’s Development Team has decided to convert Lubuntu’s seeds to Git from Bazaar. 
So where you would see independent branches like this: https://code.launchpad.net/~lubuntu-dev/ubuntu-seeds/lubuntu.bionic You will now see different Git branches in one central Git repository: https://git.launchpad.net/~lubuntu-dev/ubuntu-seeds/+git/lubuntu If you prefer viewing things in Phabricator, we have mirrored it there too: http://phab.lubuntu.me/source/lubuntu-seed/ This change has been made […]</description> <pubDate>Thu, 28 Dec 2017 03:29:34 +0000</pubDate></item><item> <title>Valorie Zimmerman: The power we have as bystanders</title> <guid isPermaLink="false">tag:blogger.com,1999:blog-5432566687488141671.post-2658135303501408073</guid> <link>http://linuxgrandma.blogspot.com/2017/12/the-power-we-have-as-bystanders.html</link> <description>Bystander.<br /><br />It seems such a passive word for a passive role.<br /><br />Let's consider how it is instead a position of power.<br /><br />First, as a bystander, I can observe what is happening which nobody else sees, because nobody else is standing exactly where I am. Nobody else has my mix of genes and history and all of what makes me who I am and so I see uniquely.<br /><br />As bystanders, each of us has power we often do not grasp. It is of the moment. We can plan, and prepare so that we are ready to act, intervene if necessary; build up potential energy. While remaining polite, I can step in to help, intervene, participate, engage. I can ACT.<br /><br />Pro-tip: run this program (courtesy of the <a href="https://linuxchix.org/" target="_blank">Linuxchix</a>):<br /><br />1. <i>be polite</i><br />2. <i>be helpful</i><br />3. <i>iterate</i><br /><br />Boom! You have a team.<br /><br />Supporting free software is one of the things I do. Right now is a great time to help support KDE.<br /><br /><b>KDE Powers You - You Can Power KDE, Too! 
</b><br /><br /><a href="https://www.kde.org/fundraisers/yearend2017/">https://www.kde.org/fundraisers/yearend2017/</a><br /><br /></description> <pubDate>Wed, 27 Dec 2017 23:16:42 +0000</pubDate> <author>[email protected] (Valorie Zimmerman)</author></item><item> <title>Clive Johnston: Chromecast your video collection from Dolphin</title> <guid isPermaLink="false">https://clivejo.com/?p=477</guid> <link>https://clivejo.com/chromecast-your-video-collection-from-dolphin/</link> <description><p>Over the Christmas period I needed to watch some videos from my laptop on my TV via Chromecast. I once again tried my faithful old VLC player, which, according to the website, should support casting in the latest release. But alas, Chromecast is disabled:</p><pre class="changelog" id="vlc_3.0.0~rc2-2ubuntu2"> * No change rebuild to add some information about why we disable chromecast support: it fails to build from source due to protobuf/mir: - <a href="https://trac.videolan.org/vlc/ticket/18329">https://trac.videolan.org/vlc/ticket/18329</a> - <a href="https://github.com/google/protobuf/issues/206">https://github.com/google/protobuf/issues/206</a></pre><p>Source: <a href="https://launchpad.net/ubuntu/+source/vlc/3.0.0~rc2-2ubuntu2">https://launchpad.net/ubuntu/+source/vlc/3.0.0~rc2-2ubuntu2</a></p><p>Then I came across ‘<a href="https://github.com/xat/castnow">castnow</a>’, which is a CLI-based app to stream an mp4 file to your Chromecast device. You can see the code here – <a href="https://github.com/xat/castnow">https://github.com/xat/castnow</a></p><p>To install it, I needed the Node package manager (npm); to get it on my system I ran</p><p><code>sudo apt install npm</code></p><p>Then, using npm, you can install castnow with:</p><p><code>sudo npm install castnow</code></p><p>This will install the tool. 
Instructions for use are here – <a href="https://github.com/xat/castnow/blob/master/README.md">https://github.com/xat/castnow/blob/master/README.md</a></p><p>Now if you are like me and use the Plasma Desktop, there is now an addon to the Dolphin menu which allows you to start the cast directly from Dolphin <img alt="🙂" class="wp-smiley" src="https://s.w.org/images/core/emoji/2.3/72x72/1f642.png" style="height: 1em;" /></p><p>In a Dolphin window go to Settings &gt; Configure Dolphin. In the Services pane click the “Download New Services” button. In the search box look for “cast” and install “Send to Chromecast” by Shaddar.</p><p><a href="https://clivejo.com/wp-content/uploads/2017/12/Add_New_Service_Dolphin.png"><img alt="" class="size-full wp-image-478 aligncenter" height="527" src="https://clivejo.com/wp-content/uploads/2017/12/Add_New_Service_Dolphin.png" width="711" /></a></p><p>Now all you have to do is browse your collection of mp4 videos and use the Dolphin menu to play them on your Chromecast device, which is pretty handy! 
I will certainly enjoy the holidays with this feature, with my favourite movies on a full-size HD screen.</p><p><a href="https://clivejo.com/wp-content/uploads/2017/12/My_Little_Pony.png"><img alt="" class="size-full wp-image-480 aligncenter" height="718" src="https://clivejo.com/wp-content/uploads/2017/12/My_Little_Pony.png" width="907" /></a></p></description> <pubDate>Wed, 27 Dec 2017 16:21:49 +0000</pubDate></item><item> <title>David Tomaschik: Even With the Cloud, Client Security Still Matters</title> <guid isPermaLink="false">https://systemoverlord.com/2017/12/27/even-with-the-cloud-client-security-still-matters</guid> <link>https://systemoverlord.com/2017/12/27/even-with-the-cloud-client-security-still-matters.html</link> <description><p><strong>As usual, this post does not necessarily represent the views of my employer (past, present, or future).</strong></p> <p>It’s Friday afternoon and the marketing manager receives an email with the new printed material proofs for the trade show. Double-clicking the PDF attachment, his PDF reader promptly crashes.</p> <p>“Ugh, I’m gonna have to call IT again. 
I’ll do it Monday morning,” he thinks, and turns off his monitor before heading home for the weekend.</p> <p>Meanwhile, in a dark room somewhere, a few lines appear on the screen of a laptop:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><table class="rouge-table"><tbody><tr><td class="rouge-gutter gl"><pre class="lineno">1
2
3
4
5
6
7
8
9
10
11
12</pre></td><td class="rouge-code"><pre>[*] Sending stage (205891 bytes) to 10.66.60.101
[*] Meterpreter session 1 opened (10.66.60.100:4444 -&gt; 10.66.60.101:49159) at 2017-12-27 16:29:13 -0800
msf exploit(multi/handler) &gt; sessions 1
[*] Starting interaction with 1...
meterpreter &gt; sysinfo
Computer        : INHUMAN-WIN7
OS              : Windows 7 (Build 7601, Service Pack 1).
Architecture    : x64
System Language : en_US
Domain          : ENTERPRISE
Logged On Users : 2
Meterpreter     : x64/windows
</pre></td></tr></tbody></table></code></pre></div></div> <p>Finally, the hacker had a foothold. He started exploring the machine remotely. First, he used <a href="https://github.com/gentilkiwi/mimikatz">Mimikatz</a> to dump the password hashes from the local system. He sent the hashes to his computer with 8 NVidia 1080Ti graphics cards to start cracking, and then kept exploring the filesystem of the marketing manager’s computer. He grabbed the browsing history and saved passwords from the browser, and noticed access to a company directory. He started a script to download the entire contents through the meterpreter session. He started to move on to the network shares when his password cracking rig flashed a new result.</p> <p>“That was fast,” he thought, looking over at the screen. 
“SuperS3cr3t isn’t much of a password.” He used the password to log in to the company’s webmail and forwarded the “proofs” (in fact a PDF exploiting a known bug in the PDF reader) to one of the IT staffers with a message asking them to take a look at why it wouldn’t render.</p> <p>Dissatisfied with waiting until the next week for an IT staffer to open the malicious PDF, he started looking for another option. He began by using his access to a single workstation to look for other computers that were vulnerable to some of the most recent publicly known exploits. Surprisingly, he found two machines that were vulnerable to <a href="https://www.rapid7.com/db/modules/exploit/windows/smb/ms17_010_eternalblue">MS17-010</a>. He sent the exploit through his existing meterpreter session and crossed his fingers.</p> <p>Moments later, he was rewarded with a second Meterpreter session. Looking around, he was quickly disappointed to realize this machine was freshly installed and so would not contain sensitive information or be hosting interesting applications. However, after running Mimikatz again, he discovered that another one of the IT staff had logged into this machine, probably as part of the setup process.</p> <p>He threw the hashes into his password cracking rig again and started looking for anything else interesting. In a few minutes, he realized this machine was devoid of anything but a basic Windows setup – not even productivity applications had been installed yet. He returned to the original host and looked for anything good, but only found a bunch of marketing materials that were basically public information.</p> <p>Frustrated, he banged on his keyboard until he remembered the scraped company directory. He went and looked at the directory information for the IT staffer and realized it not only included names and contact information for employees, but also allowed employees to include information about hobbies and interests, plus birthdays and more. 
He took the data from the IT staffer, split it up into all the included words, and placed it into a wordlist for his password cracking rig. Hoping that would get him somewhere, he went for a Red Bull.</p> <p>When he came back, he saw another result on his password cracker. This surprised him slightly, because he had expected more of an IT staffer. He was even more surprised when he saw that the password was “Snowboarding2020!” Though it met all the company’s password complexity requirements, it was still an incredibly weak password by modern standards.</p> <p>Using this newfound password, he logged into the workstation belonging to the IT engineer. He dumped the local hashes to look for further pivoting opportunities, but found only the engineer’s own password hash. As he started exploring the filesystem, however, he found many more interesting options. He quickly located an SSH private key and several text files containing AWS API keys. It only took a little bit of investigation to realize that one of the AWS API keys was a root API key for the company’s production environment.</p> <p>Using the API key, he logged in to the AWS account and quickly identified the virtual machines running the company’s database servers containing user credentials and information. He connected with the API keys he had and started dumping the usernames and password hashes. Given that the hashes were unsalted SHA-1, he figured it shouldn’t take long for his password cracking rig to work through them.</p> <p>A day later, he was posting an offering for the plaintext credential database for just a fraction of a bitcoin per customer. Satisfied, he started hunting for the next vulnerable enterprise.</p> <hr /> <p>While the preceding story was fiction, it’s an all-too-common reality. Many modern enterprises have put considerable effort into hardening their datacenter (be it virtualized or physical) but very little effort into hardening workstations. 
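</p><p>The cracking step in the story turns on the hashes being unsalted SHA-1: every candidate password hashes to the same digest everywhere, so a plain wordlist lookup suffices. A minimal sketch (hypothetical hash and wordlist; a real rig tries billions of GPU-accelerated guesses per second):</p>

```python
import hashlib

# Hypothetical unsalted SHA-1 entry as it might appear in a leaked database dump.
leaked_hash = hashlib.sha1(b"SuperS3cr3t").hexdigest()

# Tiny illustrative wordlist, e.g. built from scraped directory entries,
# hobbies, and birthdays as in the story.
wordlist = ["password123", "letmein", "Snowboarding2020!", "SuperS3cr3t"]

def crack_unsalted_sha1(target_hex, candidates):
    """Return the first candidate whose SHA-1 digest matches the target."""
    for word in candidates:
        if hashlib.sha1(word.encode("utf-8")).hexdigest() == target_hex:
            return word
    return None

print(crack_unsalted_sha1(leaked_hash, wordlist))  # prints: SuperS3cr3t
```

<p>Per-user salts, or better a deliberately slow KDF such as bcrypt or scrypt, would force the attacker to redo this work for every user instead of hashing each candidate once.</p><p>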
I often work with companies that seem to believe placing their applications into the cloud is a security panacea. While the cloud offers numerous security benefits – major cloud providers have invested heavily into security, monitor their networks 24/7, and a cloud service is clearly heavily segregated from the corporate network – it does not solve all security problems.</p> <p>An attacker who is able to compromise a workstation is able to do anything that a legitimate user of that workstation would be able to do. In the example above, the AWS keys stored on a workstation proved critical to gaining access to a treasure trove of user information, but even a lower level of access can be useful to an attacker and dangerous to your company.</p> <p>The <a href="http://www.verizonenterprise.com/verizon-insights-lab/dbir/2017/">2017 Verizon DBIR</a> provides data to support this. 66% of malware began with malicious email attachments (client-based), 81% of breaches involved stolen credentials (pivoting), and 43% of attacks involved social engineering (tactics against legitimate users).</p> <p>Imagine you have customer service representatives who log in to an application hosted in the cloud to process refunds or perform other services. An attacker with access to a customer service workstation might be able to grab their username and password (or saved cookies from the browser) and then use it to buy expensive items and refund them to themselves. (Or change delivery addresses, issue store credits, or other costly expenditures.)</p> <p>In a hospital, compromising a workstation used by doctors and nurses would lead, at a minimum, to a major HIPAA breach. In the worst case, it could be used to modify patient records or order medications that could be dangerous or fatal to a patient. Each environment needs to consider the risks posed by the access granted from their workstations and clients.</p> <p>Attackers will take the easiest route to the data they seek. 
If you’ve spent some effort on hardening your servers (or applications in the cloud), that route may well be through the workstation or client. Consider all entry points in your security strategy.</p></description> <pubDate>Wed, 27 Dec 2017 08:00:00 +0000</pubDate></item><item> <title>Jonathan Carter: Hello, world! – Welcome to my Linux related videos</title> <guid isPermaLink="false">https://jonathancarter.org/?p=8922</guid> <link>https://jonathancarter.org/2017/12/24/hello-world-welcome-to-my-linux-related-video-channel/</link> <description><p>I’ve been meaning to start a video channel for years. This is more of a test video than anything else, but if you have any ideas or suggestions, then don’t hesitate to comment.</p></description> <pubDate>Sun, 24 Dec 2017 18:13:13 +0000</pubDate></item><item> <title>Sebastian Dröge: GStreamer Rust bindings release 0.10.0 &amp; gst-plugin release 0.1.0</title> <guid isPermaLink="false">https://coaxion.net/blog/?p=515</guid> <link>https://coaxion.net/blog/2017/12/gstreamer-rust-bindings-release-0-10-0-gst-plugin-release-0-1-0/</link> <description><p>Today I’ve released version 0.10.0 of the <a href="https://rust-lang.org" rel="noopener" target="_blank">Rust</a> <a href="https://gstreamer.freedesktop.org" rel="noopener" target="_blank">GStreamer</a> <a href="https://crates.io/crates/gstreamer" rel="noopener" target="_blank">bindings</a>, and <a href="https://coaxion.net/blog/2016/05/writing-gstreamer-plugins-and-elements-in-rust/" rel="noopener" target="_blank">after a journey of more than 1½ years</a> the first release of the GStreamer plugin writing 
infrastructure crate <a href="https://crates.io/crates/gst-plugin" rel="noopener" target="_blank">“gst-plugin”</a>.</p><p>Check the repositories<a href="https://github.com/sdroege/gstreamer-rs" rel="noopener" target="_blank">¹</a><a href="https://github.com/sdroege/gst-plugin-rs" rel="noopener" target="_blank">²</a> of both for more details, the code and various examples.</p><h4>GStreamer Bindings</h4><p>Some of the changes since the 0.9.0 release were already outlined in the previous blog post, and most of the other changes were also things I found while writing GStreamer plugins. For the full changelog, take a look at the <a href="https://github.com/sdroege/gstreamer-rs/blob/master/gstreamer/CHANGELOG.md#0100---2017-12-22" rel="noopener" target="_blank">CHANGELOG.md</a> in the repository.</p><p>Other changes include</p><ul><li>I went over the whole API in the last days, added any missing things I found, simplified API as it made sense, changed functions to take <i>Option&lt;_&gt;</i> if allowed, etc.</li><li>Bindings for <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer/struct.SliceTypeFind.html#method.type_find" rel="noopener" target="_blank">using</a> and <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer/struct.TypeFind.html#method.register" rel="noopener" target="_blank">writing</a> typefinders. Typefinders are the part of GStreamer that try to guess what kind of media is to be handled based on looking at the bytes. 
Especially writing those in Rust seems worthwhile, considering that basically all of the <a href="https://cgit.freedesktop.org/gstreamer/gst-plugins-base/log/gst/typefind/gsttypefindfunctions.c" rel="noopener" target="_blank">Git log</a> of the existing typefinders consists of fixes for various kinds of memory-safety problems.</li><li>Bindings for the <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer/struct.Registry.html" rel="noopener" target="_blank">Registry</a> and PluginFeature were added, as well as fixing the relevant API that works with paths/filenames to actually work on <a href="https://doc.rust-lang.org/std/path/struct.Path.html" rel="noopener" target="_blank">Paths</a></li><li>Bindings for the GStreamer Net library were added, allowing you to build applications that synchronize their media over the network by using <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer_net/struct.PtpClock.html" rel="noopener" target="_blank">PTP</a>, <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer_net/struct.NtpClock.html" rel="noopener" target="_blank">NTP</a> or a <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer_net/struct.NetClientClock.html" rel="noopener" target="_blank">custom</a> GStreamer protocol (for which there also exists a <a href="https://sdroege.github.io/rustdoc/gstreamer/gstreamer_net/struct.NetTimeProvider.html" rel="noopener" target="_blank">server</a>). This could be used for building video walls, systems recording the same scene from multiple cameras, etc., and provides synchronization between devices of less than 1 ms (depending on network conditions).</li></ul><p>Generally, this is something like a “1.0” release for me now (due to depending on too many pre-1.0 crates this is not going to be 1.0 anytime soon). The basic API is all there and nicely usable now and hopefully without any bugs; the known-missing APIs are not too important for now and can easily be added at a later time when needed. 
At this point I don’t expect many API changes anymore.</p><h4>GStreamer Plugins</h4><p>The other important part of this announcement is the first release of the <a href="https://crates.io/crates/gst-plugin" rel="noopener" target="_blank">“gst-plugin”</a> crate. This provides the basic infrastructure for writing GStreamer plugins and elements in Rust, without having to write any unsafe code.</p><p>I started experimenting with using Rust for this more than 1½ years ago, and while a lot of things have changed in that time, this release is a nice milestone. In the beginning there were no GStreamer bindings and I was writing everything manually, and there were also still quite a few pieces of code written in C. Nowadays everything is in Rust and uses the automatically generated GStreamer bindings.</p><p>Unfortunately there is no real documentation for any of this yet; there’s only the autogenerated rustdoc documentation available from <a href="https://sdroege.github.io/rustdoc/gst-plugin/gst_plugin/" rel="noopener" target="_blank">here</a>, and various example GStreamer plugins inside the <a href="https://github.com/sdroege/gst-plugin-rs" rel="noopener" target="_blank">Git repository</a> that can be used as a starting point. Various people have already written their GStreamer plugins in Rust based on this.</p><p>The basic idea of the API, however, is that everything is as Rust-y as possible, which might not be all that much given the need to map subtyping, virtual methods and the like to something reasonable in Rust, but I believe it’s nice to use now. You basically only have to implement one or more traits on your structs, and that’s it. There’s still quite some boilerplate required, but it’s far less than what would be required in C. The best example at this point might be the <a href="https://github.com/sdroege/gst-plugin-rs/blob/master/gst-plugin-audiofx/src/audioecho.rs" rel="noopener" target="_blank">audioecho</a> element.</p><p>Over the next days (or weeks?) 
I’m not going to write any documentation yet, but will instead write a couple of very simple, minimal elements that do basically nothing and can be used as starting points to learn how all this works together. I will also write another blog post or two about the different parts of writing a GStreamer plugin and element in Rust, so that all of you can get started with that.</p><p>Let’s hope that the number of new GStreamer plugins written in C is going to decrease in the future, and maybe even new people who would’ve never done that in C, with all the footguns everywhere, can get started with writing GStreamer plugins in Rust now.</p></description> <pubDate>Fri, 22 Dec 2017 16:52:21 +0000</pubDate></item><item> <title>Ubuntu Podcast from the UK LoCo: S10E42 – Tangy Orange Chairs - Ubuntu Podcast</title> <guid isPermaLink="false">https://ubuntupodcast.org/?p=1165</guid> <link>http://ubuntupodcast.org/2017/12/21/s10e42-tangy-orange-chairs/</link> <description><p>This week we get comfy in a new chair, conduct our Perennial Podcast Prophecy Petition Point, and go over your feedback. This is the final show of the season and we’ll now be taking a couple of months’ break to eat curry, have a chat and decide if we’ll be returning for Season 11.</p> <p>It’s Season Ten Episode Forty-Two of the Ubuntu Podcast! <a href="https://twitter.com/popey" title="popey on Twitter">Alan Pope</a>, <a href="https://twitter.com/marxjohnson" title="Mark on Twitter">Mark Johnson</a> and <a href="https://twitter.com/m_wimpress" title="Martin on Twitter">Martin Wimpress</a> are connected and speaking to your brain.</p><p>In this week’s show:</p><ul><li>We discuss what we’ve been up to recently:<ul><li>Martin has bought a <a href="https://secretlab.co/products/titan">Secretlab TITAN</a> chair. 
It is very comfy.</li></ul></li></ul><h2>We review our 2017 predictions:</h2><h3>Alan</h3><ul><li>Multiple devices from Tier one vendors will ship with snappy by default (like Dell, HP, Cisco) “top line big name vendors will ship hardware with Ubuntu snappy as a default OS”<ul><li>No</li></ul></li><li>GitHub will downsize their 600-person workforce to a much smaller number and may also do something controversial to raise funds<ul><li>No – 723 according to Wikipedia</li></ul></li><li>Microsoft will provide a Linux build of a significant application – possibly Exchange or SharePoint<ul><li>No?</li></ul></li><li>Donald Trump will not last a year as president<ul><li>Sadly not.</li></ul></li></ul><h3>Mark</h3><ul><li>There will be no new Ubuntu phone on sale in 2017<ul><li>Yes</li></ul></li><li>The UK government will lose a court case related to the Investigatory Powers Act<ul><li><a href="http://www.theregister.co.uk/2016/12/21/eu_judgment/">Yes</a> and <a href="https://www.theregister.co.uk/2017/11/30/investigatory_powers_act_illegal_under_eu_law/">Yes</a>.</li></ul></li><li>This time next year, one of the top 5 distros on Distrowatch will be a distro that isn’t currently in the top 20.<ul><li>No</li></ul></li></ul><h3>Martin</h3><ul><li>Ubuntu 17.10 will be able to run Mir using the proprietary nvidia drivers and Steam will work reliably via XMir. It will also be possible to run Mir in Virtualbox.<ul><li>No</li></ul></li><li>A high-profile individual (or individuals) will fall victim to one of the many privacy threats introduced as a result of the Investigatory Powers Bill. Intimate details of their online life will be exposed to the world, compiled from one or more databases storing Internet Connection Records. 
The disclosure will possibly have serious consequences for the individuals concerned, such as losing their job or being professionally discredited.<ul><li>No</li></ul></li><li>The hype surrounding VR will build during 2017 but Virtual Reality will continue to lack adoption. Sales figures will be well below market projections.<ul><li><a href="https://arstechnica.com/gaming/2017/11/more-than-a-fad-vr-headset-sales-are-slowly-creeping-higher/">Maybe</a>?</li></ul></li></ul><h2>We make our predictions for 2018:</h2><h3>Alan</h3><ul><li>A large gaming hardware vendor from the past will produce new hardware. Someone of the size/significance of Sega. Original hardware, not just re-using the brand-name, but official product.</li><li>Valve will rev the Steam Link, perhaps making it more powerful for 4K gaming, and maybe a minor bump to the Steam Controller too</li><li>A large UK government body will accidentally leak a significant body of data. Could be a laptop/USB stick on a train or a website hack.</li></ul><h3>Mark</h3><ul><li>Either the UK or US government will collapse</li><li>A major hardware manufacturer (not a crowdfunder) will release a device in the form factor of a GPD Pocket</li><li>I will specifically buy (i.e. not in a Humble Bundle) and play through a native Linux game that is initially released in 2018.</li><li>Canonical will go public and suffer a hostile takeover by the shuffling corpse of SCO. 
<em>(bonus prediction)</em></li></ul><h3>Martin</h3><ul><li>Give or take a couple of thousand dollars, Bitcoin will have the same US dollar value in December 2018 as it does today.<ul><li>17,205.63 US dollars per BTC at the time of recording.</li></ul></li><li>A well-established PC OEM, not currently supporting Linux, will offer a pre-installed Linux distro option for their flagship products.</li><li>Four smartphones will launch in 2018 that cost $1000 or more, thanks to Apple normalising this ludicrous price tag in 2017.</li></ul><h2>Ubuntu Podcast listeners share their predictions for 2018:</h2><ul><li>Simon Butcher – The Queen piles into bitcoin and loses her fortune when bitcoin collapses to 10p</li><li>Jezra – Someone considers open sourcing a graphics driver for a chip that works with ARM, and then doesn’t</li><li>Ian – Canonical will be bought out by Ubuntu Mate.</li><li>Mattias Wernér – I predict a new push for SteamOS and Steam Machines with a serious marketing effort behind it.</li><li>Jon Spriggs – I think we’ll see Ethereum's value exceeding £2,000 before 1st December 2018 (Currently £476 on Coinbase.com). Litecoin will cross £1,000 before 1st Dec 2018 (currently £286)</li><li>Eddie Figgie – Bitcoin falls below $1k US.</li><li>McPhail – I saw the call for 2018 predictions. I predict that command line snaps will run natively in Windows and some graphical snaps will run too</li><li>Leo Arias – Costa Rica wins the FIFA world cup.</li><li>Ivan Pejić – India will ship RISC-V based Ubuntu netbook/tablet/phone.</li><li>Sachin Saini* – Solus takes over the world.</li><li>Laura Czajkowski and Joel J – Year of the (mainstream) Linux desktop <img alt="😁" class="wp-smiley" src="https://s.w.org/images/core/emoji/2.3/72x72/1f601.png" style="height: 1em;" /></li><li>Adam Eveleigh – snappy/Flatpak/AppImage(Update/d) will gain more traction as people realize that it solves the stable-for-noobs vs rolling dilemma once and for all. Which of the three will go furthest? 
Despite being in the snappy camp, I bet Flatpak</li><li>Marius Gripsgard – Ubuntu touch world domination</li><li>Jan Sprinz – Ubuntu Touch will rebase to 16.04</li><li>Ian – Canonical will IPO</li><li>Simon Butcher – Bitcoins go to £500,000 and the whole Brexit divorce bill is funded by a stash of bc found on Gordon Brown’s old laptop</li><li>Conor Murphy – Linux Steam Integration snap will get wide adoption. Over 30% of all Steam installs on Linux</li><li>Jon Spriggs – RPi 4 with either MOAR MEMORY or Gig Ethernet.</li><li>Mattias Wernér – I’ll predict that bitcoin will hit six figures in 2018. To be more specific, the six figures will be in dollars.</li><li>Jon Spriggs – I predict there will be an OggCamp ’18 😉</li><li>Laura Czajkowski – Microsoft will buy Canonical</li><li>Mortiz – Pipewire will be included in at least two major distros.</li><li>Daniel Llewelyn – Snaps will become the de facto standard and AppImages and Flatpaks will continue to be ignored</li><li>Jezra – Samsung ports Tizen to another device that is not a Samsung Phone.</li><li>Badger – Sound will finally work on Cherry Trail processors</li><li>Justin – Ubuntu Podcast to return for an eleventh season 🙂<p></p></li><li><p>And we go over all your amazing feedback – thanks for sending it – please keep sending it!</p></li><li><p>This week’s cover image is taken from <a href="https://upload.wikimedia.org/wikipedia/commons/f/fb/Orange_polyprop_chairs.jpg" rel="magnific">Wikimedia</a>.</p></li></ul><p>That’s all for this week! 
If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to <a href="mailto:[email protected]">[email protected]</a> or <a href="http://twitter.com/ubuntupodcast" title="Ubuntu Podcast on Twitter">Tweet us</a> or <a href="http://www.facebook.com/UbuntuUKPodcast" title="Ubuntu Podcast on Facebook">Comment on our Facebook page</a> or <a href="https://plus.google.com/+ubuntupodcast" title="Ubuntu Podcast on Google+">comment on our Google+ page</a> or <a href="http://www.reddit.com/r/UbuntuPodcast/" title="Ubuntu Podcast on Reddit">comment on our sub-Reddit</a>.</p><ul><li>Join us in the <a href="http://ubuntupodcast.org/telegram/" title="Ubuntu Podcast Chatter group on Telegram">Ubuntu Podcast Chatter</a> group on <a href="https://telegram.org/" title="Telegram">Telegram</a></li></ul></description> <pubDate>Thu, 21 Dec 2017 15:30:28 +0000</pubDate> <enclosure url="http://static.ubuntupodcast.org/ubuntupodcast/s10/e42/ubuntupodcast_s10e42.mp3" length="36639483" type="audio/mpeg"/></item><item> <title>Arthur Schiwon: The Story of Auto-Completion in Nextcloud Comments</title> <guid isPermaLink="false">http://www.arthur-schiwon.de/115 at http://www.arthur-schiwon.de</guid> <link>http://www.arthur-schiwon.de/story-auto-completion-nextcloud-comments</link> <description><div class="field field-name-field-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img alt="Photo by &lt;a href=&quot;https://unsplash.com/photos/i5Crg4KLblY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText&quot;&gt;andrew welch&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;" height="289" src="http://www.arthur-schiwon.de/sites/default/files/andrew-welch-116539.jpg" width="640" /></div></div></div><div class="field field-name-body 
field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><h2>AutoCompletion in Nextcloud's Commenting Feature</h2> <p>It has long been possible to leave comments on files. With Nextcloud 11 we provide a way to mention users in comments, so that a notification is created for the respective individual. This was never really advertised, however, because it lacked an auto-completion tool that offers you a type-ahead selection of users. The crucial point is that user IDs are neither known to the end user nor exposed to them. For instance, if users are provided by LDAP, a user ID might look like "1d5566a5-87e6-4451-bd2f-e0e6ba5944d9". Nobody wants to see this and the average person also will not memorize it :)</p> <p>It would be sad to see the functionality rot away unseen in a dark corner. With Nextcloud 13 the time was ripe to finally include this missing bit, which is actually pretty fundamental: every application that allows text-based communication amongst multiple people ships it.</p> <h3>The Plan</h3> <p>As a first step, I drafted a spec consisting of three parts, subject to Nextcloud's layers. Let's start from the user-facing aspects and get down in the stack:</p> <ol> <li> <p><strong>Web UI / Frontend</strong></p> <p>The requirements are to request the user items for auto-completion, offering the elements to find, pick and insert the mention, and also to render it beautifully and in a consistent way. While talking to the server and rendering were to be done with the means we had in place, for presentation and interaction I picked up the <a href="https://github.com/ichord/At.js">At.js</a> plugin for jQuery. It can be adjusted and themed nicely, offers access points where they are needed and just works pleasantly.</p> <p>One crucial point for the user experience is to have the results at hand as quickly as possible, so that the author is disturbed as little as possible when writing the comment. 
The first idea was to pull in all possible users up front, but obviously this does not scale. In the implementation we pull them on demand, and in that regard I was also working on improving the performance of the LDAP backend to gain a positive user experience.</p> </li> <li> <p><strong>Web API Endpoint</strong></p> <p>This is the interface between the Web UI and the server. What the endpoint essentially does is gather the people from its sources, optionally let them be sorted, and send the results back to the client. Since it does not provide anything explicit for the Comments app, it ought to be a public and reusable API.</p> <p>Under the hood, I intended it to get the users from the infamous <code class="inline">sharee</code> endpoint specific to the file sharing app. No urge to reinvent wheels, right? However, it turned out that I needed to round out this wheel a little bit. More about this later.</p> </li> <li> <p><strong>Server PHP API</strong></p> <p>Provided our API endpoint can retrieve the users, only one aspect is missing that needs to be added as an API. Since the newly crafted web endpoint is independent of the Comments app, so is this component. The service, called <code class="inline">AutoCompleteManager</code>, is supposed to take sorter plugin registrations and provide a method to run a result set through them.</p> <p>The idea is that the persons most likely to be mentioned are pushed to the top of the result list. Those are identified as: people that have access to the file, people that have already commented on the file, or people the author has interacted with. The necessary information is specific to other apps (<code class="inline">comments</code> and <code class="inline">files_sharing</code> in our case), hence those apps should provide a plugin. 
The API endpoint then only asks the service to run the results through the specified sorters.</p> </li></ol> <p>The original plan is laid out in the <a href="https://github.com/nextcloud/server/issues/2443">description of issue #2443</a> and contains more details, but be aware that it does not reflect the final implementation.</p> <p>Being a backend person, I started the implementation bottom-up. The first to-do, however, was not working on the components mentioned above, but axe-sharpening. The chosen source of persons for auto-completion, the <code class="inline">sharee</code> endpoint, had all its logic in the controller. This simply means that it does not belong to the public API scope (<code class="inline">\OCP\</code> namespace) and was also designed to be consumed from the web interface. In short: refactoring!</p> <h3>The <code class="inline">sharees</code> endpoint</h3> <p>The <code class="inline">sharee</code> endpoint is a real treasure. The file sharing code requests it to gather instance users, federated users, email addresses, circles and groups that are available for sharing. It is a pretty useful method, in fact not only for file sharing itself. Despite not being an official API, other apps make use of it. One example is <a href="https://apps.nextcloud.com/apps/deck">Deck</a>, which uses it for sharing suggestions, too.</p> <p>On the server side we added a service to search through all the previously mentioned users, groups, circles, etc. Let us call them collaborators, to have a single, short term. The service is <code class="inline">OCP\Collaboration\Collaborators\ISearch</code> and offers two methods: one for searching, the other for registering plugins. Those plugins are the providers (sources) of these collaborators, and <code class="inline">search()</code> delegates the query to each plugin registered for the requested <code class="inline">shareTypes</code> (the technical term for sorts of collaborators). 
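The delegation pattern just described can be sketched roughly as follows. This is a hedged Python sketch with hypothetical names (`SearchService`, `UserPlugin`); the real interface is the PHP `OCP\Collaboration\Collaborators\ISearch` with its `search()` and `registerPlugin()` methods, and real providers query backends rather than static lists:

```python
# Hypothetical sketch: a search service that delegates to plugins
# registered per shareType and collects their results.

class SearchService:
    def __init__(self):
        self._plugins = {}  # shareType -> list of provider plugins

    def register_plugin(self, share_type, plugin):
        self._plugins.setdefault(share_type, []).append(plugin)

    def search(self, term, share_types, limit=10):
        results = {}
        for share_type in share_types:
            hits = []
            # Delegate the query to every plugin registered for this type.
            for plugin in self._plugins.get(share_type, []):
                hits.extend(plugin.search(term, limit))
            results[share_type] = hits[:limit]
        return results


class UserPlugin:
    """Toy provider: case-insensitive substring search over display names."""
    def __init__(self, users):
        self._users = users

    def search(self, term, limit):
        return [u for u in self._users if term.lower() in u.lower()][:limit]


service = SearchService()
service.register_plugin("users", UserPlugin(["Alice", "Albert", "Bob"]))
print(service.search("al", ["users"]))  # {'users': ['Alice', 'Albert']}
```

Registering more than one plugin per `shareType` simply appends to the list, which mirrors how several providers can serve the same sort of collaborator.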
For backward compatibility (and to keep the changes within bounds) the result follows the array-based format used in the <code class="inline">files_sharing</code> app's controller, plus an indicator of whether to expect more results. Internally the newly introduced <code class="inline">ISearchResult</code> is used to collect and manage the results from the providing plugins.</p> <p>Each provider has to implement <code class="inline">ISearchPlugin</code>, and more than one provider can be registered for each <code class="inline">shareType</code>. The existing logic was ported to its own provider each, most residing in the server itself (because they themselves utilize server components), and additionally the <code class="inline">circlesPlugin</code> into the <a href="https://apps.nextcloud.com/apps/circles">Circles app</a>. Some glue code was necessary for registration. The apps announce the providers in their info.xml, which is then automatically fed to the register method on app load. The App Store's <a href="https://github.com/nextcloud/appstore/pull/521">XSD schema was adjusted</a> accordingly, so it will accept such apps.</p> <p>The original <code class="inline">sharee</code> controller from the <code class="inline">files_sharing</code> app lost almost 600 lines of code and now consumes the freshly formed PHP API. I cannot stress enough how good it is to have tests in place, which assure that everything still works as expected after refactoring (even if they need to be moved around). The refactor landed in <a href="https://github.com/nextcloud/server/pull/6328">pull request #6328</a>. The adaptation for the Circles app went in PRs <a href="https://github.com/nextcloud/circles/pull/126">#126</a>, <a href="https://github.com/nextcloud/circles/pull/135">#135</a> and #136.</p> <h3>Backend efforts</h3> <p>Now that the fundamentals were in shape, I was able to create the new auto-completion endpoint and the services it depends on. 
The new <code class="inline">AutoCompleteController</code> responds to GET requests and accepts a wide range of parameters, of which only one is required. First, it requests instance users (defined by parameter) from the collaborator search. It merges the exact matches with the regular ones in one array, with exact matches (not just substrings) on top. Then, if any sorter was defined by parameter, the auto-complete manager pipes the results through them. Finally, the sorted array is transformed into a simpler format that contains the ids, the end-user display names and the source types (<code class="inline">shareType</code>) before being sent back to the browser.</p> <p>Auto-complete manager? Yes, <code class="inline">\OCP\Collaboration\AutoComplete\IManager</code>, forming the Server API aspect, was also introduced. It does not deviate from the spec and is not difficult or special in any way.</p> <p>Of course the sorters, especially the public interface <code class="inline">ISorter</code>, were also introduced, as well as the required info.xml glue. Two apps were equipped with sorter plugins: <code class="inline">comments</code> pushes people that commented on a file to the top, and <code class="inline">files_sharing</code> puts people with access to the file first.</p> <h3>Serving Layer 8</h3> <p>Having laid the foundations, the Web GUI only needed to use them. The first step I did was to ship the At.js plugin for jQuery and connect it to the API endpoint. Easy, until I realized that we fundamentally needed to change the comment input elements, both for writing new comments and for editing existing ones. For one, a plain input does not provide all amenities feature-wise (sadly I do not remember what exactly); secondly, HTML markup will not be rendered, which we require to hide the user id behind the avatar and display name. 
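The idea of hiding an opaque user id behind a display name can be illustrated with a small sketch. This is Python for brevity with a simplified, hypothetical metadata format; the real code is client-side JavaScript, and the server-sent mention metadata is richer than a plain id-to-name map:

```python
import re

def render_mentions(comment, mentions):
    """Replace @userid tokens in a plain-text comment with display names.

    `mentions` is a simplified stand-in for the mention metadata the
    server sends along with a comment: user id -> display name.
    """
    def replace(match):
        uid = match.group(1)
        # Fall back to the raw id if no metadata is available for it.
        return "@" + mentions.get(uid, uid)

    return re.sub(r"@([\w-]+)", replace, comment)


meta = {"1d5566a5": "Jane Doe"}
print(render_mentions("thanks @1d5566a5!", meta))  # thanks @Jane Doe!
```

The inverse direction works the same way: the editable rich form is converted back to plain text with ids before the comment is sent to the server.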
It was the first time I had heard about the <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/contentEditable">contentEditable</a> attribute. That's cool, and mostly what was necessary was replacing <code class="inline">&lt;input&gt;</code> with <code class="inline">&lt;div contentEditable="true"&gt;</code>, applying the styles, changing a little bit of code for dealing with the value… and figuring out that you can even paste HTML into it! This is now handled as well.</p> <p>Rendering was a topic of its own, since we always want to show the mention in a nice way to the end user, even when editing. Mind, we send the plain text comment containing the user id to the server. The client is responsible for rendering the contents (the server sends the extracted mentions and the corresponding display names as metadata with the comment). A bit more work was needed to ensure that line breaks are kept when switching between the plain and rich forms.</p> <p>A minor issue was ensuring that the contacts menu popup is also shown when clicking on a mention. Since Nextcloud 12, clicking on a user name shows a popup that lets you email or call the person, for example. The At.js plugin brought its own set of CSS, which was adapted to our styling, so theming is fully supported. Eventually, a switch needed to be flipped so the plugin would not re-sort the already sorted results.</p> <p>The backend and frontend adaptations were merged with <a href="https://github.com/nextcloud/server/pull/6982">pull request 6982</a>. One remaining challenge was making the retrieval of users from LDAP fast enough to be acceptable. It was crucial for adoption, and I made sure to have it solved before asking for final reviews.</p> <h3>Speed up the LDAP Backend</h3> <p>My strategy was to figure out what the bottleneck was, resolve it, measure and compare the effects of the changes, and polish them. 
For the analysis part I was using the <a href="http://www.arthur-schiwon.de/href">xdebug profiler</a> for data collection and <a href="https://kcachegrind.github.io/html/Home.html">KCachegrind</a> for visualization. The hog was quickly uncovered.</p> <p>This requires a bit of explanation of how the LDAP backend works when retrieving users. If not cached, we reach out to the LDAP server and query for the users matching the search term, based on a configured filter. The users receive an ID internal to Nextcloud based on an LDAP attribute, by default the entry's UUID. The ID is mapped together with the Distinguished Name (DN) for quick interaction and the UUID to detect DN changes. Depending on the configuration, several other attributes are read and used in Nextcloud, for instance the email address, the photo or the quota. Since the records are read anyway, we also request most of these attributes (the inexpensive ones) with any search operation and apply them. Doing this on each (non-cached) request is the hog.</p> <p>Having the email up front, for instance, is important so that share notifications can be sent as soon as someone wants to share a file or folder with that person. So we cannot really skip that, but when we know that the user is already mapped, we do not need to update these features right away; we can move that to a background job. Splitting off the feature updates already did the trick, provided that the requested users are already known, which is only a matter of time.</p> <p>In order to measure the before and after states, I first prepared my dev instance. I connected it to a redis cache via local socket and ensured that before every run the cache was flushed. Unnecessary applications were closed. The command to measure was a command-line call that searches the LDAP server through Nextcloud, requesting the third batch of 500 results for a given search term: <code class="inline">time sudo -u http ./occ ldap:search --limit=500 --offset=1000 "ha"</code>. 
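Aggregating such repeated timings is just a matter of averaging the wall-clock times; a minimal sketch (the per-run numbers below are made up for illustration, not the actual measurements):

```python
from statistics import mean

def average_runtime(seconds):
    """Average the wall-clock times of repeated benchmark runs,
    rounded to one decimal place."""
    return round(mean(seconds), 1)

# Hypothetical per-run timings for one state of the code:
runs = [14.9, 14.5, 14.7, 14.6, 14.8]
print(average_runtime(runs))  # 14.7
```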
I ran this ten times each for the old state, an intermediate state and the final state, and went from an average of 14.7 seconds via 3.5s down to 1.8s. This suffices. For auto-completion we request the first 10 results, a significantly lighter task.</p> <p>Now we have a background job which determines its own run interval (within bounds) depending on the number of known LDAP users, and which iterates over the LDAP users matching the corresponding filter, mapping them and updating their features. This kicks in only if the background job mode is not set to "ajax", to avoid weird effects when jobs are triggered by the browser. Any serious setup should have the background jobs run through cron. Also, it runs at the earliest one hour after the last config change, so as not to interfere with setting up LDAP connections.</p> <h3>So, where are we?</h3> <p>Well, this feature is already merged in master, which will become Nextcloud 13. Beta versions are already released and <a href="https://nextcloud.com/blog/nextcloud-13-beta-3-ready-for-your-testing-go-and-win-a-t-shirt/">ready for testing</a>. Some smaller issues were identified and fixed. Currently there is also a <a href="https://github.com/nextcloud/server/pull/7514">discussion on whether avatars should appear in the comments text</a> or whether text-only is preferable. Either way, I am really happy to have this long-standing item done and out. All in all I am really satisfied with the result and looking forward to the 13 release! 
</p> <p>This work has benefited from many collaborators, special thanks in random order to <a href="https://github.com/jancborchardt">Jan</a>, <a href="https://github.com/daita">Maxence</a>, <a href="https://github.com/nickvergessen">Joas</a>, <a href="https://github.com/BernhardPosselt">Bernhard</a>, <a href="https://github.com/schiessle">Björn</a>, <a href="https://github.com/rullzer">Roeland</a> and whoever I might have forgotten.</p></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above"><div class="field-label">Tags: </div><div class="field-items"><div class="field-item even"><a href="http://www.arthur-schiwon.de/tags/nextcloud">Nextcloud</a></div><div class="field-item odd"><a href="http://www.arthur-schiwon.de/tags/planetubuntu">PlanetUbuntu</a></div><div class="field-item even"><a href="http://www.arthur-schiwon.de/tags/php">PHP</a></div></div></div></description> <pubDate>Wed, 20 Dec 2017 19:37:31 +0000</pubDate></item><item> <title>Costales: Ubucon Europe 2018: Call for papers</title> <guid isPermaLink="false">tag:blogger.com,1999:blog-2815405804906508978.post-3008183667280943104</guid> <link>http://thinkonbytes.blogspot.com/2017/12/ubuntu-europe-2018-call-for-papers.html</link> <description>Yes! 
It's time to <a href="http://ubucon.org/en/events/ubucon-europe/call-for-papers/" target="_blank">submit a conference talk, workshop, podcast and/or stand</a> for the next <a href="http://ubucon.eu/" target="_blank">Ubucon Europe 2018</a> |o/<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-RU4FqRLhkek/WjKr5ezMBvI/AAAAAAAAQS8/fJSk9uir4J0n4WT_2HmWkznXL9bPpe5SQCLcBGAs/s1600/ubuntu-forum.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="160" data-original-width="149" src="https://3.bp.blogspot.com/-RU4FqRLhkek/WjKr5ezMBvI/AAAAAAAAQS8/fJSk9uir4J0n4WT_2HmWkznXL9bPpe5SQCLcBGAs/s1600/ubuntu-forum.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Spread your knowledge to hundreds of people!</td></tr></tbody></table><br />We would love to hear from you!<br /><div><br /></div><div><a href="http://ubucon.eu/" target="_blank">+ info</a>.</div></description> <pubDate>Wed, 20 Dec 2017 17:30:27 +0000</pubDate> <author>[email protected] (Marcos Costales)</author></item><item> <title>Colin Watson: An odd test failure</title> <guid isPermaLink="false">tag:www.chiark.greenend.org.uk,2017-12-19:~cjwatson/blog/odd-test-failure.html</guid> <link>https://www.chiark.greenend.org.uk/~cjwatson/blog/odd-test-failure.html</link> <description><p>Weird test failures are great at teaching you things that you didn’t realise you might need to know.</p><p><a href="https://www.chiark.greenend.org.uk/~cjwatson/blog/mysterious-bug-with-twisted-plugins.html">As previously mentioned</a>, I’ve been working on converting Launchpad from <a href="http://www.buildout.org/">Buildout</a> to <a href="https://virtualenv.pypa.io/en/stable/">virtualenv</a> and <a href="https://pip.pypa.io/en/stable/">pip</a>, and I finally landed that change on our development branch today. 
The final landing was mostly quite smooth, except for one test failure on our buildbot that I hadn’t seen before:</p><div class="highlight"><pre><span class="x">ERROR: lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked</span><span class="x">worker ID: unknown worker (bug in our subunit output?)</span><span class="x">----------------------------------------------------------------------</span><span class="gt">Traceback (most recent call last):</span><span class="gr">_StringException</span>: <span class="n">log: {{{</span><span class="x">36.384 creating repository in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/.bzr/.</span><span class="x">36.388 creating branch &lt;bzrlib.branch.BzrBranchFormat7 object at 0xeb85b36c&gt; in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/</span><span class="x">}}}</span> <span class="gt">Traceback (most recent call last):</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/lib/lp/codehosting/codeimport/tests/test_worker.py"</span>, line <span class="m">1108</span>, in <span class="n">test_stacked</span> <span class="n">stacked_on</span><span class="o">.</span><span class="n">fetch</span><span class="p">(</span><span class="n">Branch</span><span class="o">.</span><span class="n">open</span><span class="p">(</span><span class="n">source_details</span><span class="o">.</span><span class="n">url</span><span class="p">))</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/branch.py"</span>, line <span class="m">186</span>, in <span class="n">open</span> <span class="n">possible_transports</span><span class="o">=</span><span class="n">possible_transports</span><span class="p">,</span> <span class="n">_unsupported</span><span class="o">=</span><span 
class="n">_unsupported</span><span class="p">)</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py"</span>, line <span class="m">689</span>, in <span class="n">open</span> <span class="n">_unsupported</span><span class="o">=</span><span class="n">_unsupported</span><span class="p">)</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py"</span>, line <span class="m">718</span>, in <span class="n">open_from_transport</span> <span class="n">find_format</span><span class="p">,</span> <span class="n">transport</span><span class="p">,</span> <span class="n">redirected</span><span class="p">)</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/transport/__init__.py"</span>, line <span class="m">1719</span>, in <span class="n">do_catching_redirections</span> <span class="k">return</span> <span class="n">action</span><span class="p">(</span><span class="n">transport</span><span class="p">)</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py"</span>, line <span class="m">706</span>, in <span class="n">find_format</span> <span class="n">probers</span><span class="o">=</span><span class="n">probers</span><span class="p">)</span> File <span class="nb">"/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py"</span>, line <span class="m">1155</span>, in <span class="n">find_format</span> <span class="k">raise</span> <span class="n">errors</span><span class="o">.</span><span class="n">NotBranchError</span><span class="p">(</span><span class="n">path</span><span class="o">=</span><span class="n">transport</span><span class="o">.</span><span class="n">base</span><span class="p">)</span><span 
class="gr">NotBranchError</span>: <span class="n">Not a branch: "/tmp/tmpdwqrc6/trunk/".</span></pre></div> <p>When I investigated this locally, I found that I could reproduce it if I ran just that test on its own, but not if I ran it together with the other tests in the same class. That’s certainly my favourite way round for test isolation failures to present themselves (it’s more usual to find state from one test leaking out and causing another one to fail, which can make for a very time-consuming exercise of trying to find the critical combination), but it’s still pretty odd.</p><p>I stepped through the <code>Branch.open</code> call in each case in the hope of some enlightenment. The interesting difference was that the custom probers installed by the <code>bzr-svn</code> plugin weren’t installed when I ran that one test on its own, so it was trying to open a branch as a Bazaar branch rather than using the foreign-branch logic for Subversion, and this presumably depended on some configuration that only some tests put in place. I was on the verge of just explicitly setting up that plugin in the test suite’s <code>setUp</code> method, but I was still curious about exactly what was breaking this.</p><p>Launchpad installs several Bazaar plugins, and <code>lib/lp/codehosting/__init__.py</code> is responsible for putting most of these in place: anything in Launchpad itself that uses Bazaar is generally supposed to do something like <code>import lp.codehosting</code> to set everything up. I therefore put a breakpoint at the top of <code>lp.codehosting</code> and stepped through it to see whether anything was going wrong in the initial setup. Sure enough, I found that <code>bzrlib.plugins.svn</code> was failing to import due to an exception raised by <code>bzrlib.i18n.load_plugin_translations</code>, which was being swallowed silently but meant that its custom probers weren’t being installed. 
Here’s what that function looks like:</p><div class="highlight"><pre><span class="k">def</span> <span class="nf">load_plugin_translations</span><span class="p">(</span><span class="n">domain</span><span class="p">):</span> <span class="sd">"""Load the translations for a specific plugin.</span> <span class="sd"> :param domain: Gettext domain name (usually 'bzr-PLUGINNAME')</span><span class="sd"> """</span> <span class="n">locale_base</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">dirname</span><span class="p">(</span> <span class="nb">unicode</span><span class="p">(</span><span class="n">__file__</span><span class="p">,</span> <span class="n">sys</span><span class="o">.</span><span class="n">getfilesystemencoding</span><span class="p">()))</span> <span class="n">translation</span> <span class="o">=</span> <span class="n">install_translations</span><span class="p">(</span><span class="n">domain</span><span class="o">=</span><span class="n">domain</span><span class="p">,</span> <span class="n">locale_base</span><span class="o">=</span><span class="n">locale_base</span><span class="p">)</span> <span class="n">add_fallback</span><span class="p">(</span><span class="n">translation</span><span class="p">)</span> <span class="k">return</span> <span class="n">translation</span></pre></div> <p>In this case, <code>sys.getfilesystemencoding</code> was returning <code>None</code>, which isn’t a valid <code>encoding</code> argument to <code>unicode</code>. But why would that be? 
It gave me a sensible result when I ran it from a Python shell in this environment. A bit of head-scratching later and it occurred to me to look at a backtrace:</p><div class="highlight"><pre>(Pdb) bt /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(703)&lt;module&gt;()-&gt; main() /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(694)main()-&gt; execsitecustomize() /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(548)execsitecustomize()-&gt; import sitecustomize /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/sitecustomize.py(7)&lt;module&gt;()-&gt; lp_sitecustomize.main() /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(193)main()-&gt; dont_wrap_bzr_branch_classes() /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(139)dont_wrap_bzr_branch_classes()-&gt; import lp.codehosting&gt; /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp/codehosting/__init__.py(54)&lt;module&gt;()-&gt; load_plugins([_get_bzr_plugins_path()])</pre></div> <p>I wonder if there’s something interesting about being imported from a <code>sitecustomize</code> hook? 
Sure enough, when I went to look at Python for where <code>sys.getfilesystemencoding</code> is set up, I found this in <code>Py_InitializeEx</code>:</p><div class="highlight"><pre> <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="n">Py_NoSiteFlag</span><span class="p">)</span> <span class="n">initsite</span><span class="p">();</span> <span class="cm">/* Module site */</span> <span class="p">...</span><span class="cp">#if defined(Py_USING_UNICODE) &amp;&amp; defined(HAVE_LANGINFO_H) &amp;&amp; defined(CODESET)</span> <span class="cm">/* On Unix, set the file system encoding according to the</span><span class="cm"> user's preference, if the CODESET names a well-known</span><span class="cm"> Python codec, and Py_FileSystemDefaultEncoding isn't</span><span class="cm"> initialized by other means. Also set the encoding of</span><span class="cm"> stdin and stdout if these are terminals, unless overridden. */</span> <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="n">overridden</span> <span class="o">||</span> <span class="o">!</span><span class="n">Py_FileSystemDefaultEncoding</span><span class="p">)</span> <span class="p">{</span> <span class="p">...</span> <span class="p">}</span></pre></div> <p>I <a href="https://code.launchpad.net/~cjwatson/launchpad/avoid-importing-bzr-plugins-from-site/+merge/335379">moved this out of sitecustomize</a>, and it’s working better now. But did you know that a <code>sitecustomize</code> hook can’t safely use anything that depends on <code>sys.getfilesystemencoding</code>? 
I certainly didn’t, until it bit me.</p></description> <pubDate>Tue, 19 Dec 2017 13:52:52 +0000</pubDate></item><item> <title>Ted Gould: Net change</title> <guid isPermaLink="true">https://gould.cx/ted/blog/2017/12/19/Net-Change/</guid> <link>https://gould.cx/ted/blog/2017/12/19/Net-Change/</link> <description><p>Recently the <a href="https://www.npr.org/sections/thetwo-way/2017/12/14/570526390/fcc-repeals-net-neutrality-rules-for-internet-providers">FCC voted down the previously held rules on net neutrality</a>. I think that this is a bad decision by the FCC, but I don't think that it will result in the amount of chaos that some people are suggesting. I thought I'd write about how I see the net changing, for better or worse, with these regulations removed.</p> <p>If we think about how the Internet is today, basically everyone pays to access the network individually: both the groups that want to host information and the people who want to access those sites. Everyone pays a fee for 'their connection' which contributes to the companies that create the backbone and connect it together. An Internet connection by itself has very little value, but it is the definition of a "network effect": because everyone is on the Internet, it has value for you to connect there as well. Some services you connect to use a lot of your home Internet connection, and some of them charge different rates for it. Independent of how much they use or charge you, your ISP isn't involved in any meaningful way. The key change here is that now your ISP will be associated with the services that you use.</p> <p>Let's talk about a theoretical video streaming service that charges for its video service. Before, they'd charge something like $10 a month for licensing and their hosting costs. Now they're going to end up paying an access fee to get to consumers' Internet connections, so their charges are going to change. They end up charging $20 a month and giving $10 of that to the ISPs of their customers. 
In the end consumers will end up paying for their Internet connection just as much, but it'd be bundled into other services they're buying on the Internet. ISPs love this because suddenly they're not the ones charging too much; they're out of the billing here. They could even possibly charge less (free?) for home Internet access as it'd be subsidized by the services you use.</p> <h3 id="better-connections">Better connections</h3> <p>I think that it is quite possible that this could result in better Internet connections for a large number of households. Today those households have mediocre connectivity, and they can complain about it, but for the most part ISPs don't care about a few individuals' complaints. What could change is that when a large company that is paying millions of dollars in access fees complains, they might start listening.</p> <p>The ISPs are supporting the removal of Net Neutrality regulations to get money from the services on the Internet. I don't think that they realize that with that money will come an obligation to perform to those services' requirements. Most of those services are more customer-focused than ISPs are, which is likely to cause a culture shock once those services hold weight with ISP management. I think it is likely ISPs will come to regret not supporting net neutrality.</p> <h3 id="expensive-hosting-for-independent-and-smaller-providers">Expensive hosting for independent and smaller providers</h3> <p>It is possible for large services on the Internet to negotiate contracts with large ISPs and make everything generally work out so that most consumers don't notice. There is then a reasonable question of how providers that are too small to negotiate a contract play in this environment. I think it is likely that the hosting providers will fill in this gap with different plans that match a level of connectivity. You'll end up with more versions of that "small" instance, some with consumer bandwidth built into the cost and others without. 
There may also be mirroring services like CDNs that have group-negotiated rates with various ISPs. The end result is that hosting will get more expensive for small businesses.</p> <p>The bundling of bandwidth is also likely to shake up the cloud hosting business. While folks like Amazon and Google have been able to dominate costs through massive datacenter buys, suddenly that isn’t the only factor. It seems likely the large ISPs will build public clouds of their own, as they can compete by playing funny-money with the bandwidth charges.</p> <p>Increased hosting costs will hurt large non-profits the most, folks like Wikipedia and The Internet Archive. They already have a large amount of their budget tied up in hosting, and increasing that is going to make their finances difficult. Ideally ISPs and other Internet companies would help by donating to these amazing projects, but that's probably too optimistic. We'll need individuals to make up this gap. These organizations could be the real victims of not having net neutrality.</p> <h3 id="digital-divide">Digital Divide</h3> <p>A potential gain would be that, if ISPs are getting most of the money from services, the actual connections could become very cheap. There would then be potential for more lower-income families to get access to the Internet as a whole. While this is possible, the likelihood is that it would reach only families in regions where the end services themselves want customers. It will help those near an affluent area, not everyone. It seems that there is some potential for gain, but I don't believe it will end up having a large impact.</p> <h3 id="what-can-i-do">What can I do?</h3> <p>If you're a consumer, there's probably not a lot you can do; you're along for the ride. You can contact your representatives, and if this is a world that you don't like the sound of, ask them to change it. 
Laws are a social contract for how our society works; make sure they're a contract you want to be part of.</p> <p>As a developer of a web service you can make sure that your deployment is able to work on multi-cloud type setups. You're probably going to end up going from multi-cloud to a whole-lotta-cloud, as each cloud has bandwidth deals your business is interested in. Also, make sure you can isolate which parts need the bandwidth and which don't, as that may become more important moving forward.</p></description> <pubDate>Tue, 19 Dec 2017 00:00:00 +0000</pubDate></item><item> <title>Colin Watson: Kitten Block equivalent for Firefox 57</title> <guid isPermaLink="false">tag:www.chiark.greenend.org.uk,2017-12-19:~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html</guid> <link>https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html</link> <description><p>I’ve been using <a href="https://addons.mozilla.org/en-US/firefox/addon/kitten-block/">KittenBlock</a> for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain <span class="caps">UK</span> newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.</p><p>However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with <a href="https://addons.mozilla.org/en-GB/firefox/addon/ublock-origin/">uBlock Origin</a>. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add <code>www.dailymail.co.uk</code> and <code>www.express.co.uk</code>, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.</p><p>Incidentally, this also works fine on Android. 
The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.</p></description> <pubDate>Tue, 19 Dec 2017 00:00:00 +0000</pubDate></item><item> <title>James Page: Ubuntu OpenStack Dev Summary – 18th December 2017</title> <guid isPermaLink="false">http://javacruft.wordpress.com/?p=1451</guid> <link>https://javacruft.wordpress.com/2017/12/18/ubuntu-openstack-dev-summary-18th-november-2018/</link> <description><p><span style="font-weight: 400;">Welcome to the Ubuntu OpenStack development summary!</span></p><p><span style="font-weight: 400;">This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.</span></p><p><span style="font-weight: 400;">If there is something that you would like to see covered in future summaries, or you have general feedback on content, please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!</span></p><h2><b>OpenStack Distribution</b></h2><h3><b>Stable Releases</b></h3><p><span style="font-weight: 400;">Current in-flight SRUs for OpenStack-related packages:</span></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/bugs/1728576">Ceph 12.2.1</a></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/bugs/1724622">OpenvSwitch 2.8.1</a></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/bugs/1715254">nova-novncproxy process gets wedged, requiring kill -HUP</a></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/cloud-archive/+bug/1681073">Horizon Cinder Consistency Groups</a></p><p><span style="font-weight: 
400;">Recently released SRUs for OpenStack-related packages:</span></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/bugs/1735691">Percona XtraDB Cluster Security Updates</a></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/bugs/1734990">Pike Stable Releases</a></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/bugs/1736149">Ocata Stable Releases</a></p><p style="padding-left: 30px;"><a href="https://bugs.launchpad.net/cloud-archive/+bug/1706566">Ceph 10.2.9</a></p><h3><b>Development Release</b></h3><p><span style="font-weight: 400;">Since the last dev summary, OpenStack Queens Cloud Archive pockets have been set up and have received package updates for the first and second development milestones – you can install them on Ubuntu 16.04 LTS using:</span></p><pre>sudo add-apt-repository cloud-archive:queens[-proposed]</pre><p>OpenStack Queens will also form part of the Ubuntu 18.04 LTS release in April 2018, so alternatively you can try out OpenStack Queens using Ubuntu Bionic directly.</p><p>You can always test with up-to-date packages built from project branches from the Ubuntu OpenStack testing PPAs:</p><pre>sudo add-apt-repository ppa:openstack-ubuntu-testing/queens</pre><h2><b>Nova LXD</b></h2><p>No significant feature work to report on since the last dev summary.</p><p>The OpenStack Ansible team have contributed an additional functional gate for nova-lxd – it’s currently non-voting, but does provide some additional testing feedback for nova-lxd developers during the code review process. If it proves stable and useful, we’ll make this a voting check/gate.</p><h2><b>OpenStack Charms</b></h2><h3>Ceph charm migration</h3><p>Since the last development summary, the Charms team released the 17.11 set of stable charms; this includes a migration path for users of the deprecated ceph charm to using ceph-mon and ceph-osd. 
For full details on this process check out the <a href="https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ceph-migration.html">charm deployment guide</a>.</p><h3><b>Queens Development<br /></b></h3><p>As part of the 17.11 charm release a number of charms switched to execution of charm hooks under Python 3 – this includes the nova-compute, neutron-{api,gateway,openvswitch}, ceph-{mon,osd} and heat charms; once these have had some battle testing, we’ll focus on migrating the rest of the charm set to Python 3 as well.</p><p>Charm changes to support the second Queens milestone (mainly in ceilometer and keystone) and Ubuntu Bionic are landing into charm development to support ongoing testing during the development cycle. OpenStack Charm deployments for Queens and later will default to using the Keystone v3 API (v2 has been removed as of Queens). Telemetry users must deploy Ceilometer with Gnocchi and Aodh as the Ceilometer API has now been removed from charm based deployments and from the Ceilometer codebase. You can install the current tip of charm development using the openstack-charmers-next prefix for charmstore URLs – for example:</p><pre>juju deploy cs:~openstack-charmers-next/neutron-api</pre><p>ZeroMQ support has been dropped from the charms; having no known users, no functional testing in the gate, and having issued deprecation warnings in release notes, it was time to drop the associated code from the code base. PostgreSQL and deploy-from-source support are also expected to be removed from the charms this cycle.</p><p>You can read the full list of specs currently scheduled for Queens <a href="http://specs.openstack.org/charm-specs/specs/queens/index.html">here</a>.</p><h3><b>Releases</b></h3><p>The last stable charm release went out at the end of November including the first stable release of the Gnocchi charm – you can read the full details in the <a href="https://docs.openstack.org/charm-guide/latest/1711.html">release notes</a>. 
The next stable charm release will take place in February alongside OpenStack Queens, with a release shortly after the Ubuntu 18.04 LTS release in May to sweep up any pending LTS support and fixes needed.</p><h3><b>IRC (and meetings)</b></h3><p><span style="font-weight: 400;">As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see <a href="http://eavesdrop.openstack.org/#OpenStack_Charms" rel="nofollow">http://eavesdrop.openstack.org/#OpenStack_Charms</a> for more details. The next IRC meeting will be on the 8th of January at 1700 UTC.<br /></span></p><p><span style="font-weight: 400;">And finally – Merry Christmas!<br /></span></p><p>EOM</p><p> </p><br /> <a href="http://feeds.wordpress.com/1.0/gocomments/javacruft.wordpress.com/1451/" rel="nofollow"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/javacruft.wordpress.com/1451/" /></a> <img alt="" border="0" height="1" src="https://pixel.wp.com/b.gif?host=javacruft.wordpress.com&amp;blog=16060086&amp;post=1451&amp;subd=javacruft&amp;ref=&amp;feed=1" width="1" /></description> <pubDate>Mon, 18 Dec 2017 16:18:21 +0000</pubDate></item><item> <title>Serge Hallyn: Pockyt and edbrowse</title> <guid isPermaLink="false">http://s3hh.wordpress.com/?p=574</guid> <link>https://s3hh.wordpress.com/2017/12/16/pockyt-and-edbrowse/</link> <description><p>I use <a href="https://s3hh.wordpress.com/2013/09/13/rss-over-pocket/">r2e and pocket</a> to follow tech related rss feeds. To read these I sometimes use the nook, sometimes use the <a href="http://getpocket.com">pocket website</a>, but often I use <a href="http://edbrowse.org">edbrowse</a> and <a href="https://pypi.python.org/pypi/pockyt">pockyt</a> on a terminal. 
I tend to prefer this because I can see more entries more quickly, delete them en masse, use the terminal theme already set for the right time of day (dark and light for night/day), and just do less clicking.</p><p>My .ebrc has the following:</p><pre># pocket get
function+pg {
e1
!pockyt get -n 40 -f '{id}: {link} - {excerpt}' -r newest -o ~/readitlater.txt &gt; /dev/null 2&gt;&amp;1
e98
e ~/readitlater.txt
1,10n
}

# pocket delete
function+pd {
!awk -F: '{ print $1 }' ~/readitlater.txt &gt; ~/pocket.txt
!pockyt mod -d -i ~/pocket.txt
}</pre><p>It’s not terribly clever, but it works – both on Linux and macOS. To use these, I start up edbrowse, and type &lt;pg. This will show me the latest 10 entries. Any which I want to keep around, I delete (5n). Any which I want to read, I open (4g) and move to a new workspace (M2).</p><p>When I'm done, any references which I want deleted are still in ~/readitlater.txt. Those which I want to keep are deleted from that file. (Yeah, a bit backwards from normal <img alt="🙂" class="wp-smiley" src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" style="height: 1em;" /> ) At that point I make sure to save (w), then run &lt;pd to delete them from pocket.</p><h3>Disclaimer</h3><p>The opinions expressed in this blog are my own views and not those of Cisco.</p><br /> <a href="http://feeds.wordpress.com/1.0/gocomments/s3hh.wordpress.com/574/" rel="nofollow"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/s3hh.wordpress.com/574/" /></a> <img alt="" border="0" height="1" src="https://pixel.wp.com/b.gif?host=s3hh.wordpress.com&amp;blog=14017495&amp;post=574&amp;subd=s3hh&amp;ref=&amp;feed=1" width="1" /></description> <pubDate>Sat, 16 Dec 2017 05:18:43 +0000</pubDate></item><item> <title>Clive Johnston: Love KDE software? 
Show your love by donating today</title> <guid isPermaLink="false">https://clivejo.com/?p=464</guid> <link>https://clivejo.com/love-kde-software-show-your-love-by-donating-today/</link> <description><p>It is the season of giving and if you use KDE software, donate to KDE. Software such as Krita, Kdenlive, KDE Connect, Kontact, digiKam, the Plasma desktop and many many more are all projects under the KDE umbrella.</p><p style="text-align: center;"><a href="https://clivejo.com/wp-content/uploads/2017/12/I_love_KDE.png"><img alt="" class="alignnone size-full wp-image-469" height="251" src="https://clivejo.com/wp-content/uploads/2017/12/I_love_KDE.png" width="194" /></a></p><p>KDE have launched a fund drive running until the end of 2017. If you want to help make KDE software better, please consider donating. For more information on what KDE will do with any money you donate, please go to <a href="https://www.kde.org/fundraisers/yearend2017/">https://www.kde.org/fundraisers/yearend2017/</a></p></description> <pubDate>Fri, 15 Dec 2017 22:16:21 +0000</pubDate></item><item> <title>Matthew Helmke: Learn Java the Easy Way</title> <guid isPermaLink="false">https://matthewhelmke.net/?p=2127</guid> <link>https://matthewhelmke.net/2017/12/learn-java-the-easy-way/</link> <description><p>This is an enjoyable introduction to programming in Java by an author I have enjoyed in the past.</p><p><a href="https://www.nostarch.com/learnjava"><img alt="" class="alignleft size-thumbnail wp-image-2077" height="150" src="https://www.nostarch.com/sites/default/files/styles/uc_product/public/LearnJavatheEasyWay_cover.png?itok=6Osi2XjJ" width="128" /></a></p><p><a href="https://www.nostarch.com/learnjava">Learn Java the Easy Way: A Hands-On Introduction to Programming</a> was written by Dr. Bryson Payne. 
I previously reviewed his book <a href="https://matthewhelmke.net/2015/05/teach-your-kids-to-code/">Teach Your Kids to Code</a>, which is Python-based.</p><p>Learn Java the Easy Way covers all the topics one would expect, from development IDEs (it focuses heavily on Eclipse and Android Studio, which are both reasonable, solid choices) to debugging. In between, the reader receives clear explanations of how to perform calculations, manipulate text strings, use conditions and loops, and create functions, along with solid and easy-to-understand definitions of important concepts like classes, objects, and methods.</p><p>Java is taught systematically, starting with the simple and moving to the complex. We first create a simple command-line game, then we create a GUI for it, then we make it into an Android app, then we add menus and preference options, and so on. Along the way, new games and enhancement options are explored, some in detail and some in end-of-chapter exercises designed to give more confident or advancing students ideas for pushing themselves further than the book’s content. I like that.</p><p>Side note: I was pleasantly amused to discover that the first program in the book is the same as one that I originally wrote in 1986 on a first-generation Casio graphing calculator, so I would have something to kill time when class lectures got boring.</p><p>The pace of the book is good. Just as I began to feel done with a topic, the author moved to something new. I never felt like details were skipped and I also never felt like we were bogged down with too much detail, beyond what is needed for the current lesson. 
The author has taught computer science and programming for nearly 20 years, and it shows.</p><p>Bottom line: if you want to learn Java, this is a good introduction that is clearly written and will give you a nice foundation upon which you can build.</p><p><span style="font-size: 11px;"><a href="http://matthewhelmke.net/2009/10/14/do-i-dare-review-more-books/" rel="noopener" target="_blank">Disclosure</a>: I was given my copy of this book by the publisher as a review copy. See also: <a href="http://matthewhelmke.net/2012/10/are-all-book-reviews-positive/">Are All Book Reviews Positive?</a></span></p></description> <pubDate>Fri, 15 Dec 2017 15:53:49 +0000</pubDate></item><item> <title>Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2017</title> <guid isPermaLink="false">https://raphaelhertzog.com/?p=3658</guid> <link>https://raphaelhertzog.com/2017/12/15/freexians-report-about-debian-long-term-support-november-2017/</link> <description><p><img alt="A Debian LTS logo" class="alignright size-full wp-image-3226" height="128" src="https://raphaelhertzog.com/files/2015/03/Debian-LTS-2-small.png" width="128" />Like <a href="https://raphaelhertzog.com/tag/Freexian+LTS/">each month</a>, here comes a report about the work of <a href="http://www.freexian.com/services/debian-lts.html">paid contributors</a> to <a href="https://wiki.debian.org/LTS">Debian LTS</a>.</p><h3>Individual reports</h3><p>In November, about 144 work hours have been dispatched among 12 paid contributors. 
Their reports are available:</p><ul><li><a href="https://anarc.at/blog/2017-11-30-free-software-activities-november-2017/">Antoine Beaupré</a> did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).</li><li><a href="https://www.decadent.org.uk/ben/blog/debian-lts-work-november-2017.html">Ben Hutchings</a> did 17 hours (out of 13h allocated + 4 extra hours).</li><li><a href="https://lists.debian.org/debian-lts/2017/11/msg00087.html">Brian May</a> did 10 hours.</li><li><a href="https://chris-lamb.co.uk/posts/free-software-activities-in-november-2017">Chris Lamb</a> did 13 hours.</li><li><a href="https://lists.debian.org/debian-lts/2017/12/msg00044.html">Emilio Pozuelo Monfort</a> did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).</li><li><a href="https://lists.debian.org/debian-lts/2017/12/msg00016.html">Guido Günther</a> did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).</li><li><a href="https://lists.debian.org/debian-lts/2017/12/msg00006.html">Hugo Lefeuvre</a> did 13h.</li><li>Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.</li><li><a href="https://gambaru.de/blog/2017/12/06/my-free-software-activities-in-november-2017/">Markus Koschany</a> did 14.75 hours (out of 13 allocated + 1.75 extra hours).</li><li><a href="http://inguza.com/report/debian-long-term-support-work-2017-november">Ola Lundqvist</a> did 7h.</li><li><a href="https://raphaelhertzog.com/2017/12/03/my-free-software-activities-in-november-2017/">Raphaël Hertzog</a> did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).</li><li><a href="https://lists.debian.org/debian-lts/2017/12/msg00003.html">Roberto C. 
Sanchez</a> did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for December).</li><li><a href="http://blog.alteholz.eu/2017/12/my-debian-activities-in-november-2017/">Thorsten Alteholz</a> did 13 hours.</li></ul><h3>About external support partners</h3><p>You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on <a href="https://credativ.com">credativ</a> to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.</p><p>In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a <a href="https://github.com/credativ/xen-lts/">Xen 4.1 branch on GitHub</a>, Diego commits his work on the <a href="https://git.libav.org/?p=libav.git;a=shortlog;h=refs/heads/release/0.8">release/0.8 branch in the official git repository</a>.</p><h3>Evolution of the situation</h3><p>The <a href="https://www.freexian.com/services/debian-lts.html">number of sponsored hours</a> remained unchanged at 183 hours per month. 
It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.</p><p>The <a href="https://security-tracker.debian.org/tracker/status/release/oldstable">security tracker</a> currently lists 55 packages with a known CVE and the <a href="https://anonscm.debian.org/viewvc/secure-testing/data/dla-needed.txt?view=markup">dla-needed.txt file</a> 35 (we’re a bit behind in CVE triaging apparently).</p><h3>Thanks to our sponsors</h3><p>New sponsors are in bold.</p><ul><li>Platinum sponsors:</li><ul><li><a href="http://www.toshiba.co.jp/worldwide/index.html">TOSHIBA</a> (for 26 months)</li><li><a href="https://github.com">GitHub</a> (for 17 months)</li></ul><li>Gold sponsors:</li><ul><li><a href="http://www.positive-internet.com">The Positive Internet</a> (for 42 months)</li><li><a href="http://www.blablacar.fr">Blablacar</a> (for 41 months)</li><li><a href="http://www.linode.com">Linode</a> (for 31 months)</li><li><a href="http://www.babiel.com">Babiel GmbH</a> (for 20 months)</li><li><a href="https://www.plathome.com">Plat’Home</a> (for 20 months)</li></ul><li>Silver sponsors:</li><ul><li><a href="http://www.domainnameshop.com">Domeneshop AS</a> (for 41 months)</li><li><a href="http://www.univ-lille3.fr">Université Lille 3</a> (for 41 months)</li><li><a href="http://trollweb.no">Trollweb Solutions</a> (for 39 months)</li><li><a href="http://www.nantesmetropole.fr/">Nantes Métropole</a> (for 35 months)</li><li><a href="https://www.dalenys.com">Dalenys</a> (for 32 months)</li><li><a href="http://www.univention.de">Univention GmbH</a> (for 27 months)</li><li><a href="http://portail.univ-st-etienne.fr/">Université Jean Monnet de St Etienne</a> (for 27 months)</li><li><a href="https://www.sonusnet.com">Sonus Networks</a> (for 21 months)</li><li><a href="https://maxcluster.de">maxcluster GmbH</a> (for 15 months)</li><li><a href="https://www.exonet.nl">Exonet B.V.</a> (for 11 months)</li><li><a href="https://www.lrz.de">Leibniz 
Rechenzentrum</a> (for 5 months)</li><li><a href="https://vente-privee.com">Vente-privee.com</a></li></ul><li>Bronze sponsors:</li><ul><li><a href="http://www.intars.at">David Ayers – IntarS Austria</a> (for 42 months)</li><li><a href="http://www.evolix.fr">Evolix</a> (for 42 months)</li><li><a href="http://www.offensive-security.com">Offensive Security</a> (for 42 months)</li><li><a href="http://www.seznam.cz">Seznam.cz, a.s.</a> (for 42 months)</li><li><a href="http://freeside.biz">Freeside Internet Service</a> (for 41 months)</li><li><a href="http://www.mytux.fr">MyTux</a> (for 41 months)</li><li><a href="http://intevation.de">Intevation GmbH</a> (for 39 months)</li><li><a href="http://linuxhotel.de">Linuxhotel GmbH</a> (for 39 months)</li><li><a href="https://daevel.fr">Daevel SARL</a> (for 37 months)</li><li><a href="http://bitfolk.com">Bitfolk LTD</a> (for 36 months)</li><li><a href="http://www.megaspace.de">Megaspace Internet Services GmbH</a> (for 36 months)</li><li><a href="http://www.greenbone.net">Greenbone Networks GmbH</a> (for 35 months)</li><li><a href="http://numlog.fr">NUMLOG</a> (for 35 months)</li><li><a href="http://www.wingo.ch/">WinGo AG</a> (for 35 months)</li><li><a href="http://lheea.ec-nantes.fr">Ecole Centrale de Nantes – LHEEA</a> (for 31 months)</li><li><a href="http://sig-io.nl">Sig-I/O</a> (for 28 months)</li><li><a href="https://www.entrouvert.com/">Entr’ouvert</a> (for 26 months)</li><li><a href="https://adfinis-sygroup.ch">Adfinis SyGroup AG</a> (for 23 months)</li><li><a href="http://www.allogarage.fr">GNI MEDIA</a> (for 18 months)</li><li><a href="http://www.legi.grenoble-inp.fr">Laboratoire LEGI – UMR 5519 / CNRS</a> (for 18 months)</li><li><a href="https://quarantainenet.nl">Quarantainenet BV</a> (for 18 months)</li><li><a href="https://www.rhx.it">RHX Srl</a> (for 15 months)</li><li><a href="http://bearstech.com">Bearstech</a> (for 9 months)</li><li><a href="http://lihas.de">LiHAS</a> (for 9 months)</li><li><a 
href="http://www.people-doc.com">People Doc</a> (for 6 months)</li><li><a href="http://www.catalyst.net.nz">Catalyst IT Ltd</a> (for 4 months)</li></ul></ul><p style="font-size: smaller;"><a href="https://raphaelhertzog.com/2017/12/15/freexians-report-about-debian-long-term-support-november-2017/#comments">No comment</a> | Liked this article? <a href="http://raphaelhertzog.com/support-my-work/">Click here</a>. | My blog is <a href="http://flattr.com/thing/26545/apt-get-install-debian-wizard">Flattr-enabled</a>.</p></description> <pubDate>Fri, 15 Dec 2017 14:15:21 +0000</pubDate></item><item> <title>Dimitri John Ledkov: What does FCC Net Neutrality repeal mean to you?</title> <guid isPermaLink="false">tag:blogger.com,1999:blog-347582618045055410.post-2027344606717920550</guid> <link>http://feedproxy.google.com/~r/tdlk/~3/Hj4fv9WOAMc/what-does-fcc-net-neutrality-repeal.html</link> <description><div dir="ltr" style="text-align: left;"><center><div dir="ltr" style="background-color: #f1f1f1; border-color: black; border-radius: 30px; border: 2px solid; padding: 10px; text-align: left; width: 400px;"><h1>Sorry, the web page you have requested is not available through your internet connection.</h1><h1 style="text-align: center;"><div style="font-family: Arial, sans-serif; font-size: 16px; font-weight: normal; line-height: 18.079999923706055px; margin-bottom: 18.08px; text-align: left;">We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.</div><hr style="font-family: Arial, sans-serif; font-size: 16px; font-weight: normal; line-height: 18.079999923706055px; text-align: left;" /><div>If you are a home broadband customer, for more information on why certain web pages are blocked, please click <a href="https://www.eff.org/deeplinks/content-blocking" style="color: #cc0000; text-decoration: none;" target="_blank" title="Home broadband">here</a>.</div><div>If you are a business 
customer, or are trying to view this page through your company's internet connection, please click <a href="https://www.eff.org/deeplinks/content-blocking" style="color: #cc0000; text-decoration: none;" target="_blank" title="Business">here</a>. <br /><div>∞ </div></div></h1></div></center></div></description> <pubDate>Fri, 15 Dec 2017 09:09:37 +0000</pubDate> <author>[email protected] (Dimitri John Ledkov)</author></item><item> <title>Sebastian Dröge: A GStreamer Plugin like the Rec Button on your Tape Recorder – A Multi-Threaded Plugin written in Rust</title> <guid isPermaLink="false">https://coaxion.net/blog/?p=505</guid> <link>https://coaxion.net/blog/2017/12/a-gstreamer-plugin-like-the-rec-button-on-your-tape-recorder-a-multi-threaded-plugin-written-in-rust/</link> <description><p>As <a href="https://www.rust-lang.org" 
rel="noopener" target="_blank">Rust</a> is known for <a href="https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.html" rel="noopener" target="_blank">“Fearless Concurrency”</a>, that is, being able to write concurrent, multi-threaded code without fear, it seemed like a good fit for a <a href="https://gstreamer.freedesktop.org" rel="noopener" target="_blank">GStreamer</a> element that we had to write at <a href="https://centricular.com" rel="noopener" target="_blank">Centricular</a>.</p><p>Previous experience with Rust for writing (mostly) single-threaded GStreamer elements and applications (also multi-threaded) was all quite successful and promising already. And in the end, this new element was also a pleasure to write and probably faster than doing the equivalent in C. For the impatient, the <a href="https://github.com/sdroege/gst-plugin-rs/blob/master/gst-plugin-togglerecord/src/togglerecord.rs" rel="noopener" target="_blank">code</a>, <a href="https://github.com/sdroege/gst-plugin-rs/blob/master/gst-plugin-togglerecord/tests/tests.rs" rel="noopener" target="_blank">tests</a> and a <a href="https://www.gtk.org/" rel="noopener" target="_blank">GTK+</a> <a href="https://github.com/sdroege/gst-plugin-rs/blob/master/gst-plugin-togglerecord/examples/gtk_recording.rs" rel="noopener" target="_blank">example application</a> (written with the great <a href="http://gtk-rs.org" rel="noopener" target="_blank">Rust GTK bindings</a>, but the GStreamer element is also usable from C or any other language) can be found <a href="https://github.com/sdroege/gst-plugin-rs/tree/master/gst-plugin-togglerecord" rel="noopener" target="_blank">here</a>.</p><h4>What does it do?</h4><p>The main idea of the element is that it basically works like the rec button on your tape recorder. There is a single boolean property called “record”, and whenever it is set to <i>true</i> it will pass through data and whenever it is set to <i>false</i> it will drop all data. 
But different to the existing <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-valve.html" rel="noopener" target="_blank">valve</a> element, it</p><ul><li>Outputs a contiguous timeline without gaps, i.e. there are no gaps in the output when not recording. Similar to the recording you get on a tape recorder, you don’t have 10s of silence if you didn’t record for 10s.</li><li>Handles and synchronizes multiple streams at once. When recording e.g. a video stream and an audio stream, every recorded segment starts and stops with both streams at the same time.</li><li>Is key-frame aware. If you record a compressed video stream, each recorded segment starts at a keyframe and ends right before the next keyframe to make it most likely that all frames can be successfully decoded.</li></ul><p>The multi-threading aspect here comes from the fact that in GStreamer each stream usually has its own thread, so in this case the video stream and the audio stream(s) would come from different threads but would have to be synchronized between each other.</p><p>The GTK+ example application for the plugin plays a video with the current playback time and a <i>beep</i> every second, and allows recording this as an MP4 file in the current directory.</p><h4>How did it go?</h4><p>This new element was again based on the <a href="https://github.com/sdroege/gstreamer-rs" rel="noopener" target="_blank">Rust GStreamer bindings</a> and the <a href="https://github.com/sdroege/gst-plugin-rs" rel="noopener" target="_blank">infrastructure</a> that I was writing over the last year or two for writing GStreamer plugins in Rust.</p><p>As written above, it generally went all fine and was quite a pleasure, but there were a few things that seem noteworthy. But first of all, writing this in Rust was much more convenient and fun than writing it in C would’ve been, and I’ve written enough similar code in C before. 
It would’ve taken quite a bit longer, I would’ve had to debug more problems in the new code during development (there were actually surprisingly few things going wrong during development, I expected more!), and probably would’ve written less exhaustive tests because writing tests in C is just so inconvenient.</p><h5>Rust does not prevent deadlocks</h5><p>While this should be clear, and was also clear to me before, this seems like it might need some reiteration. Safe Rust prevents data races, but not all possible bugs that multi-threaded programs can have. Rust is not magic, only a tool that helps you prevent some classes of potential bugs.</p><p>For example, you can’t just stop thinking about lock order if multiple mutexes are involved, or carelessly use <a href="https://doc.rust-lang.org/std/sync/struct.Condvar.html" rel="noopener" target="_blank">condition variables</a> without making sure that your conditions actually make sense and are accessed atomically. As a wise man once said, “the safest program is the one that does not run at all”, and a deadlocking program is very close to that.</p><p>The part about condition variables might be something that can be improved in Rust. Without this care, you can easily end up in situations where you wait forever or your conditions are actually inconsistent. Currently Rust’s condition variables only require a mutex to be passed to the functions for waiting for the condition to be notified, but it would probably also make sense to require passing the same mutex to the constructor and notify functions to make it absolutely clear that you need to ensure that your conditions are always accessed/modified while this specific mutex is locked. Otherwise you might end up in debugging hell.</p><p>Fortunately during development of the plugin I only ran into a simple deadlock, caused by accidentally keeping a mutex locked for too long and then running into conflict with another one. 
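That kind of lock-scope problem can be sketched in a few lines of safe Rust (a hypothetical, minimal example with a made-up `move_item` helper, not code from the plugin): the first guard is dropped explicitly before the second lock is taken, so only one mutex is ever held at a time.

```rust
use std::sync::Mutex;

// Hypothetical sketch: move one item from `src` to `dst` without ever
// holding both locks at once. `drop(s)` releases the first mutex
// explicitly instead of waiting for the guard to fall out of scope.
fn move_item(src: &Mutex<Vec<i32>>, dst: &Mutex<Vec<i32>>) -> Option<i32> {
    let mut s = src.lock().unwrap();
    let item = s.pop();
    drop(s); // first lock released here, before the second is taken

    if let Some(v) = item {
        dst.lock().unwrap().push(v);
    }
    item
}
```

Keeping `s` alive across the `dst.lock()` call would be exactly the "mutex locked for too long" situation: with another thread doing the same transfer in the opposite direction, the two lock acquisitions can interleave into the classic deadlock.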
This is probably an easy trap to fall into, given that the most common way of unlocking a mutex is to let the <a href="https://doc.rust-lang.org/std/sync/struct.MutexGuard.html" rel="noopener" target="_blank">mutex lock guard fall out of scope</a>. This makes it impossible to forget to unlock the mutex, but also makes it less explicit when it is unlocked, and sometimes explicit unlocking by manually dropping the mutex lock guard is still necessary.</p><p>So in summary, while a big group of potential problems with multi-threaded programs is prevented by Rust, you still have to be careful not to run into any of the many others, especially if you use lower-level constructs like condition variables and not just e.g. channels. Everything is, however, far more convenient than doing the same in C, and with more support from the compiler, so I definitely prefer writing such code in Rust.</p><h5>Missing API</h5><p>As usual, for the first dozen projects using a new library or new bindings to an existing library, you’ll notice some missing bits and pieces. That a relatively core part of GStreamer, the <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstRegistry.html" rel="noopener" target="_blank">GstRegistry API</a>, was missing was surprising nonetheless. True, you usually don’t use it directly and I only needed it here for loading the new plugin from a non-standard location, but it was still surprising. Let’s hope this was the biggest oversight. If you look at the <a href="https://github.com/sdroege/gstreamer-rs/issues" rel="noopener" target="_blank">issues page</a> on GitHub, you’ll find a few other things that are still missing though. But nobody needed them yet, so it’s probably fine for the time being.</p><p>Another part of missing APIs that I noticed during development was that many manual (i.e.
not auto-generated) bindings didn’t have the <a href="https://doc.rust-lang.org/std/fmt/trait.Debug.html" rel="noopener" target="_blank">Debug</a> trait implemented, or only in a not very useful way. This is solved now, as otherwise I wouldn’t have been able to properly log what is happening inside the element to allow easier debugging later if something goes wrong.</p><p>Apart from that there were also various other smaller things that were missing, or bugs (see below) that I found in the bindings while going through all these. But those don’t seem very noteworthy – check the commit logs if you’re interested.</p><h5>Bugs, bugs, bugs</h5><p>I also found a couple of bugs in the bindings. They fall broadly into two categories:</p><ul><li>Annotation bugs in GStreamer. The auto-generated parts of the bindings are generated from an XML description of the API, which is in turn generated from the C headers and code and the annotations in there. A couple of annotations were wrong (or missing) in GStreamer, which caused memory leaks in my case. Such mistakes could also easily cause memory-safety issues though. The annotations are fixed now, which will also benefit all the other language bindings for GStreamer (and I’m not sure why nobody noticed the memory leaks there before me).</li><li>Bugs in the manually written parts of the bindings. Similar to the above, there was one memory leak, and another case where a function could’ve returned <i>NULL</i> but did not have this case covered on the Rust side by returning an <a href="https://doc.rust-lang.org/std/option/enum.Option.html" rel="noopener" target="_blank">Option&lt;_&gt;</a>.</li></ul><p>Generally I was quite happy with the lack of bugs though; the bindings are really ready for production at this point. Notably, all the bugs that I found are things that are unfortunately “normal” and common when writing code in C, while Rust prevents exactly these classes of bugs.
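The NULL-return case from the second category is the classic pattern: on the C side a function may return NULL, and the safe binding has to surface that as an Option instead of a bare value. A generic sketch of the idiom, with a hypothetical stand-in for the C function rather than an actual GStreamer symbol:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// Stand-in for a C library function that may return NULL
// (hypothetical, self-contained so the sketch compiles on its own).
unsafe extern "C" fn c_get_name(found: bool) -> *const c_char {
    if found {
        b"videosink\0".as_ptr() as *const c_char
    } else {
        std::ptr::null()
    }
}

// The safe binding maps the NULL case to None instead of blindly
// wrapping the pointer -- exactly the bug class described above.
fn get_name(found: bool) -> Option<String> {
    unsafe {
        let ptr = c_get_name(found);
        if ptr.is_null() {
            None
        } else {
            Some(CStr::from_ptr(ptr).to_string_lossy().into_owned())
        }
    }
}

fn main() {
    assert_eq!(get_name(true).as_deref(), Some("videosink"));
    assert!(get_name(false).is_none());
    println!("NULL case handled safely");
}
```

The point is that the Option forces every caller to handle the NULL case at compile time, whereas in C a forgotten check compiles silently.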
As such, they only have to be solved once, at the bindings layer; afterwards you’re free of them, don’t have to spend any brain capacity on their existence anymore, and can use your brain to solve the actual task at hand.</p><h5>Inconvenient API</h5><p>Similar to the missing API, whenever using some rather new API you will find things that are inconvenient and could ideally be done better. The biggest case here was the <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/GstSegment.html" rel="noopener" target="_blank">GstSegment</a> API. A segment represents a (potentially open-ended) playback range and contains all the information to convert timestamps to the different time bases used in GStreamer. I’m not going to get into details here; best check the documentation.</p><p>A segment can be in different formats, e.g. in time or bytes. In the C API this is handled by storing the format inside the segment and requiring you to pass the format together with the value to every function call; internally there are checks that make the function fail if there is a format mismatch. In the previous version of the Rust segment API this was done the same way, and it caused lots of <i>unwrap()</i> calls in this element.</p><p>But in Rust we can do better, and the new segment API now encodes the format in the type system (i.e. there is a <i>Segment&lt;Time&gt;</i>) so that only values with the correct type (e.g. <i>ClockTime</i>) can be passed to the corresponding functions of the segment. In addition there is a type for a generic segment (which still has all the runtime checks) and functions to “cast” between the two.</p><p>Overall this gives more type safety (the compiler already checks that you don’t mix calculations between seconds and bytes) and makes the API usage more convenient, as various error conditions just can’t happen and thus don’t have to be handled.
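The type-level encoding can be sketched with marker types; these are simplified hypothetical types for illustration, not the actual gstreamer-rs Segment API:

```rust
use std::marker::PhantomData;

// Marker types standing in for GStreamer's segment formats.
struct Time;
struct Bytes;

// The format is part of the type, so mixing seconds and bytes
// becomes a compile error instead of a runtime format check.
struct Segment<F> {
    start: u64,
    stop: u64,
    _format: PhantomData<F>,
}

impl<F> Segment<F> {
    fn new(start: u64, stop: u64) -> Self {
        Segment { start, stop, _format: PhantomData }
    }

    fn duration(&self) -> u64 {
        self.stop - self.start
    }
}

impl Segment<Time> {
    // Only a time-format segment accepts clock-time positions.
    fn contains(&self, clock_time: u64) -> bool {
        clock_time >= self.start && clock_time < self.stop
    }
}

fn main() {
    let seg: Segment<Time> = Segment::new(0, 10_000_000_000);
    let seg_bytes: Segment<Bytes> = Segment::new(0, 4096);
    // seg_bytes.contains(..) would simply not compile: no such method.
    println!("{} {}", seg.contains(5_000_000_000), seg_bytes.duration());
}
```

A call like seg_bytes.contains(..) is rejected by the compiler, which is exactly the class of runtime format check (and unwrap()) that disappears with this design.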
In C, by contrast, such errors are often simply ignored and left unhandled, potentially leaving traps that cause hard-to-debug problems at a later time.</p><p>That Rust requires all errors to be handled makes it very obvious how many potential error cases the average C code out there is not handling at all, and also shows that a more expressive language than C can easily prevent many of these error cases at compile time already.</p></description> <pubDate>Thu, 14 Dec 2017 22:41:50 +0000</pubDate></item><item> <title>Ubuntu Podcast from the UK LoCo: S10E41 – Round Glorious Canvas - Ubuntu Podcast</title> <guid isPermaLink="false">https://ubuntupodcast.org/?p=1160</guid> <link>http://ubuntupodcast.org/2017/12/14/s10e41-round-glorious-canvas/</link> <description><p>This week we’ve taken a stroll around a parallel universe and watched some YouTube. Patreon updates its fee structure and then realises it was a terrible idea, Mozilla releases a speech-to-text engine, Oumuamua gets probed and Microsoft releases the Q# quantum programming language.</p> <p>It’s Season Ten Episode Forty-One of the Ubuntu Podcast! <a href="https://twitter.com/popey" title="popey on Twitter">Alan Pope</a>, <a href="https://twitter.com/marxjohnson" title="Mark on Twitter">Mark Johnson</a> and <a href="https://twitter.com/m_wimpress" title="Martin on Twitter">Martin Wimpress</a> are connected and speaking to your brain.</p><p>In this week’s show:</p><ul><li>We discuss what we’ve been up to recently:<ul><li>Mark has been exploring Oxford in a parallel universe.</li><li>Alan has been watching <a href="https://www.youtube.com/platform32">YouTube</a>.</li></ul></li><li>We discuss the news:<ul><li>Patreon announce that they are <a href="https://blog.patreon.com/updating-patreons-fee-structure/">Updating Patreon’s Fee Structure</a> and the Internet caught fire. <strong>Since recording this episode Patreon have said, <a href="https://blog.patreon.com/not-rolling-out-fees-change/">We messed up.
We’re sorry, and we’re not rolling out the fees change</a></strong>.</li><li><a href="https://blog.mozilla.org/blog/2017/11/29/announcing-the-initial-release-of-mozillas-open-source-speech-recognition-model-and-voice-dataset/">Mozilla releases Speech-to-text engine and a voice training dataset</a></li><li><a href="http://www.bbc.co.uk/news/science-environment-42329244">Oumuamua to be probed</a></li><li><a href="https://arstechnica.com/gadgets/2017/12/microsofts-q-quantum-programming-language-out-now-in-preview/">Microsoft’s Q# quantum programming language out now in preview</a></li></ul></li><li>We discuss the community news:<ul><li><a href="https://kubuntu.org/news/testing-a-switch-to-breeze-dark-plasma-theme-by-default/">Testing a switch to default Breeze-Dark Plasma theme in Bionic daily isos and default settings</a></li><li><a href="https://blog.simos.info/how-to-migrate-lxd-from-deb-ppa-package-to-snap-package/">How to migrate LXD from DEB/PPA package to Snap package</a></li><li><a href="https://clivejo.com/bye-bye-lastpass-hello-bitwarden/">Bye Bye LastPass, hello bitwarden</a></li></ul></li><li>We mention some events:<ul><li><a href="https://codein.withgoogle.com/">Google CodeIn</a>: 28th November to 17th January 2018 – All around the world.</li><li><a href="https://fosdem.org/2018/">FOSDEM 2018</a>: 3 &amp; 4 February 2018. Brussels, Belgium.</li><li><a href="http://thinkonbytes.blogspot.co.uk/2017/12/3rd-ubucon-europe-2018.html">UbuCon Europe 2018</a>: 27th, 28th and 29th of April 2018. Xixón, Spain.</li></ul></li><li>This week’s cover image is taken from <a href="https://upload.wikimedia.org/wikipedia/commons/f/fb/Kim_Jong-il_in_North_Korean_propaganda_(6075328850).jpg" rel="magnific">Wikimedia</a>.</li></ul><p>That’s all for this week!
If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to <a href="mailto:[email protected]">[email protected]</a> or <a href="http://twitter.com/ubuntupodcast" title="Ubuntu Podcast on Twitter">Tweet us</a> or <a href="http://www.facebook.com/UbuntuUKPodcast" title="Ubuntu Podcast on Facebook">Comment on our Facebook page</a> or <a href="https://plus.google.com/+ubuntupodcast" title="Ubuntu Podcast on Google+">comment on our Google+ page</a> or <a href="http://www.reddit.com/r/UbuntuPodcast/" title="Ubuntu Podcast on Reddit">comment on our sub-Reddit</a>.</p><ul><li>Join us in the <a href="http://ubuntupodcast.org/telegram/" title="Ubuntu Podcast Chatter group on Telegram">Ubuntu Podcast Chatter</a> group on <a href="https://telegram.org/" title="Telegram">Telegram</a></li></ul></description> <pubDate>Thu, 14 Dec 2017 15:00:35 +0000</pubDate> <enclosure url="http://static.ubuntupodcast.org/ubuntupodcast/s10/e41/ubuntupodcast_s10e41.mp3" length="31357056" type="audio/mpeg"/></item><item> <title>Chris Glass: Magic URLs in the Ubuntu ecosystem</title> <guid isPermaLink="false">tag:tribaal.io,2017-12-13:/magic-urls.html</guid> <link>https://tribaal.io/magic-urls.html</link> <description><p>Because of the distributed nature of Ubuntu development, it is sometimes a little difficult for me to keep track of the "special" URLs for various actions or reports that I'm regularly interested in.</p><p>Therefore I started gathering them in my personal wiki (I use the excellent <a href="http://zim-wiki.org/">"zim" desktop wiki</a>), and realized some of my colleagues and friends would be interested in that list as well.
I'll do my best to keep this blog post up-to-date as I discover new ones.</p><p><img alt="A magic book" src="https://tribaal.io/images/magic.png" title="Not quite a list of spells" /></p><p>If you know of other candidates for this list, please don't hesitate to <a href="https://twitter.com/3baal">get in touch</a>!</p><p>Behold, tribaal's "secret URL" list!</p><h3>Pending SRUs</h3><p>Once a package has been uploaded to a -proposed pocket, it needs to be verified as per <a href="https://wiki.ubuntu.com/StableReleaseUpdates">the SRU process</a>. Packages pending <a href="https://wiki.ubuntu.com/QATeam/PerformingSRUVerification">verification</a> end up in this list.</p><p><a href="https://people.canonical.com/~ubuntu-archive/pending-sru.html">https://people.canonical.com/~ubuntu-archive/pending-sru.html</a></p><h3>Sponsorship queue</h3><p>People who don't have upload rights for the package they fixed need to request sponsorship. This queue is the place to check if you're waiting for someone to pick it up and upload it.</p><p><a href="http://reqorts.qa.ubuntu.com/reports/sponsoring/">http://reqorts.qa.ubuntu.com/reports/sponsoring/</a></p><h3>Upload queue</h3><p>A log of what got uploaded (and to which pocket) for a particular release, and also a queue of packages that have been uploaded and are now waiting for review before entering the archive.</p><p>For the active development release this is for brand new packages; for frozen releases these are SRU packages. Once approved at this step, the packages enter -proposed.</p><p><a href="https://launchpad.net/ubuntu/xenial/+queue?queue_state=1">https://launchpad.net/ubuntu/xenial/+queue?queue_state=1</a></p><h3>The Launchpad build farm</h3><p>A list of all the builders Launchpad currently has, broken down by architecture.
You can look at jobs being built in real time, and the occupation level of the whole build farm here as well.</p><p><a href="https://launchpad.net/builders">https://launchpad.net/builders</a></p><h3>Proposed migration excuses</h3><p>For the currently in-development Ubuntu release, packages are first uploaded to -proposed, then a set of conditions need to be met before they can be promoted to the released pockets. The list of packages that have failed this automatic migration, and the reasons why, can be found on this page.</p><p><a href="https://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html">https://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html</a></p><h3>Merge-O-matic</h3><p>Not really a "magic" URL, but this system gathers information and lists for the automatic merging system, which merges Debian packages into the development release of Ubuntu.</p><p><a href="https://merges.ubuntu.com/">https://merges.ubuntu.com/</a></p><h3>Transitions tracker</h3><p>This page tracks transitions, which are toolchain changes or other package updates with "lots" of dependencies. It tracks the dependencies' build status.</p><p><a href="https://people.canonical.com/~ubuntu-archive/transitions/html/">https://people.canonical.com/~ubuntu-archive/transitions/html/</a></p></description> <pubDate>Wed, 13 Dec 2017 09:05:00 +0000</pubDate></item><item> <title>Rhonda D&#39;Vine: #metoo</title> <guid isPermaLink="true">http://rhonda.deb.at/blog/2017/12/13#metoo</guid> <link>http://rhonda.deb.at/blog/2017/12/13#metoo</link> <description><p>I long thought about whether I should post a/my #metoo. It wasn't a rape. Nothing really happened. And a lot of these stories are very disturbing.</p> <p>And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky.
Not that I was very interested in the movie ... But the door to the screening room stood open, and curious as we were, we looked through it. The projectionist saw us and waved us in. It was exciting to see a movie from that perspective, one that was forbidden to us.</p> <p>He explained to us how the machines worked, showed us how the film rolls were put in, and showed us how to spot the signals on the screen that are the cue to start the second projector with the new roll.</p> <p>During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant, and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.</p> <p>Nothing really happened, and we didn't say anything.</p> <p align="right"> <i><a href="http://rhonda.deb.at/blog/personal">/personal</a> | <a href="http://rhonda.deb.at/blog/personal/metoo.html">permanent link</a> | <a href="http://rhonda.deb.at/blog/personal/metoo.html">Comments: 2</a> | <a href="http://flattr.com/thing/46312/Rhondas-Blog" target="_blank"><img alt="Flattr this" border="0" src="http://api.flattr.com/button/button-compact-static-100x17.png" title="Flattr this" /></a></i></p></description> <pubDate>Wed, 13 Dec 2017 08:48:00 +0000</pubDate></item> </channel></rss>