<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 12 Apr 2026 18:07:54 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>TechSNAP - Episodes Tagged with “Arc”</title>
    <link>https://techsnap.systems/tags/arc</link>
    <pubDate>Fri, 21 Feb 2020 18:00:00 -0800</pubDate>
    <description>Systems, Network, and Administration Podcast. Every two weeks TechSNAP covers the stories that impact those of us in the tech industry, and all of us who follow it. Every episode we dedicate a portion of the show to answering audience questions, discussing best practices, and solving your problems.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Systems, Network, and Administration Podcast.</itunes:subtitle>
    <itunes:author>Jupiter Broadcasting</itunes:author>
    <itunes:summary>Systems, Network, and Administration Podcast. Every two weeks TechSNAP covers the stories that impact those of us in the tech industry, and all of us who follow it. Every episode we dedicate a portion of the show to answering audience questions, discussing best practices, and solving your problems.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Jupiter Broadcasting</itunes:name>
      <itunes:email>chris@jupiterbroadcasting.com</itunes:email>
    </itunes:owner>
<itunes:category text="News">
  <itunes:category text="Tech News"/>
</itunes:category>
<item>
  <title>423: Hopeful for HAMR</title>
  <link>https://techsnap.systems/423</link>
  <guid isPermaLink="false">579b3028-f4b8-408a-ad04-ee0f8d017f78</guid>
  <pubDate>Fri, 21 Feb 2020 18:00:00 -0800</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/579b3028-f4b8-408a-ad04-ee0f8d017f78.mp3" length="21313956" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC.</itunes:subtitle>
  <itunes:duration>29:36</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC. 
Plus Jim's journeys with Clear Linux, and why Ubuntu 18.04.4 is a maintenance release worth talking about. 
</description>
  <itunes:keywords>Ubuntu, 18.04.4, 18.04, LTS, Linux, WiFi, hardware enablement, maintenance release, Clear Linux OS, Linux desktop, Intel, Clear Linux, benchmarks, performance, swupd, ZFS, ZFS on Linux, ZoL, MobaXterm, LRU, WSL, Windows, Microsoft, L2ARC, ARC, filesystems, cache, caching, HDD, storage, hard drives, HAMR, SMR, MAMR, Seagate, Western Digital, latency, throughput, DevOps, TechSNAP, Jupiter Broadcasting, A Cloud Guru, Linux Academy, sysadmin podcast</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC. </p>

<p>Plus Jim&#39;s journeys with Clear Linux, and why Ubuntu 18.04.4 is a maintenance release worth talking about.</p><p>Links:</p><ul><li><a title="Ubuntu 18.04.4 LTS: here&#39;s what&#39;s new" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/ubuntu-18-04-4-lts-released-wednesday-heres-whats-new/">Ubuntu 18.04.4 LTS: here's what's new</a> &mdash; It's not as shiny and exciting as entirely new versions, of course, but it does pack in some worthwhile security and bugfix upgrades, as well as support for more and newer hardware.</li><li><a title="18.04.4 - Ubuntu Wiki" rel="nofollow" href="https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes/ChangeSummary/18.04.4">18.04.4 - Ubuntu Wiki</a></li><li><a title="MobaXterm" rel="nofollow" href="https://mobaxterm.mobatek.net/">MobaXterm</a> &mdash; Enhanced terminal for Windows with X11 server, tabbed SSH client, network tools and much more.</li><li><a title="Linux distro review: Intel’s own Clear Linux OS" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/linux-distro-review-intels-own-clear-linux-os/?comments=1">Linux distro review: Intel’s own Clear Linux OS</a> &mdash; There's not much question that Clear Linux is your best bet if you want to turn in the best possible benchmark numbers. The question not addressed here is, what's it like to run Clear Linux as a daily driver? 
We were curious, so we took it for a spin.</li><li><a title="Clear Linux* Project" rel="nofollow" href="https://clearlinux.org/">Clear Linux* Project</a> &mdash; Clear Linux OS is an open source, rolling release Linux distribution optimized for performance and security, from the Cloud to the Edge, designed for customization, and manageability.</li><li><a title="swupd — Documentation for Clear Linux* project" rel="nofollow" href="https://docs.01.org/clearlinux/latest/guides/clear/swupd.html">swupd — Documentation for Clear Linux* project</a></li><li><a title="clr-boot-manager: Kernel &amp; Boot Loader Management" rel="nofollow" href="https://github.com/clearlinux/clr-boot-manager">clr-boot-manager: Kernel &amp; Boot Loader Management</a></li><li><a title="Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/9745">Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs</a></li><li><a title="Persistent L2ARC might be coming to ZFS on Linux" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/zfs-on-linux-should-get-a-persistent-ssd-read-cache-feature-soon/">Persistent L2ARC might be coming to ZFS on Linux</a> &mdash; The primary ARC is kept in system RAM, but an L2ARC device can be created from one or more fast disks. In a ZFS pool with one or more L2ARC devices, when blocks are evicted from the primary ARC in RAM, they are moved down to L2ARC rather than being thrown away entirely. 
In the past, this feature has been of limited value, both because indexing a large L2ARC occupies system RAM which could have been better used for primary ARC and because L2ARC was not persistent across reboots.</li><li><a title="Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/pull/9582">Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs</a> &mdash; This feature implements a light-weight persistent L2ARC metadata structure that allows L2ARC contents to be recovered after a reboot. This significantly eases the impact a reboot has on read performance on systems with large caches.</li><li><a title="LINUX Unplugged 303: Stateless and Dateless" rel="nofollow" href="https://linuxunplugged.com/303">LINUX Unplugged 303: Stateless and Dateless</a> &mdash; We visit Intel to figure out what Clear Linux is all about and explain a few tricks that make it unique.</li><li><a title="LINUX Unplugged Blog: Clear Linux OS 2019" rel="nofollow" href="https://linuxunplugged.com/articles/clear-linux-os-2019">LINUX Unplugged Blog: Clear Linux OS 2019</a></li><li><a title="HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/hamr-dont-hurt-em-laser-assisted-hard-drives-are-coming-in-2020/">HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020</a> &mdash; Although the 2012 "just around the corner" HAMR drives seem to have been mostly vapor, the technology is a reality now. 
Seagate has been trialing 16TB HAMR drives with select customers for more than a year and claims that the trials have proved that its HAMR drives are "plug and play replacements" for traditional CMR drives, requiring no special care and having no particular poor use cases compared to the drives we're all used to.</li><li><a title="HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units" rel="nofollow" href="https://blog.seagate.com/craftsman-ship/hamr-milestone-seagate-achieves-16tb-capacity-on-internal-hamr-test-units/">HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units</a></li><li><a title="Western Digital debuts 18TB and 20TB near-MAMR disk drives" rel="nofollow" href="https://blocksandfiles.com/2019/09/03/western-digital-18tb-and-20tb-mamr-disk-drives/">Western Digital debuts 18TB and 20TB near-MAMR disk drives</a></li><li><a title="Previously on TechSNAP 341: HAMR Time" rel="nofollow" href="https://techsnap.systems/341">Previously on TechSNAP 341: HAMR Time</a> &mdash; We've got bad news for Wifi-lovers as the KRACK hack takes the world by storm; We have the details &amp; some places to watch to make sure you stay patched. Plus, some distressing revelations about third party access to your personal information through some US mobile carriers. Then we cover the ongoing debate over HAMR, MAMR, and the future of hard drive technology &amp; take a mini deep dive into the world of elliptic curve cryptography.

</li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC. </p>

<p>Plus Jim&#39;s journeys with Clear Linux, and why Ubuntu 18.04.4 is a maintenance release worth talking about.</p><p>Links:</p><ul><li><a title="Ubuntu 18.04.4 LTS: here&#39;s what&#39;s new" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/ubuntu-18-04-4-lts-released-wednesday-heres-whats-new/">Ubuntu 18.04.4 LTS: here's what's new</a> &mdash; It's not as shiny and exciting as entirely new versions, of course, but it does pack in some worthwhile security and bugfix upgrades, as well as support for more and newer hardware.</li><li><a title="18.04.4 - Ubuntu Wiki" rel="nofollow" href="https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes/ChangeSummary/18.04.4">18.04.4 - Ubuntu Wiki</a></li><li><a title="MobaXterm" rel="nofollow" href="https://mobaxterm.mobatek.net/">MobaXterm</a> &mdash; Enhanced terminal for Windows with X11 server, tabbed SSH client, network tools and much more.</li><li><a title="Linux distro review: Intel’s own Clear Linux OS" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/linux-distro-review-intels-own-clear-linux-os/?comments=1">Linux distro review: Intel’s own Clear Linux OS</a> &mdash; There's not much question that Clear Linux is your best bet if you want to turn in the best possible benchmark numbers. The question not addressed here is, what's it like to run Clear Linux as a daily driver? 
We were curious, so we took it for a spin.</li><li><a title="Clear Linux* Project" rel="nofollow" href="https://clearlinux.org/">Clear Linux* Project</a> &mdash; Clear Linux OS is an open source, rolling release Linux distribution optimized for performance and security, from the Cloud to the Edge, designed for customization, and manageability.</li><li><a title="swupd — Documentation for Clear Linux* project" rel="nofollow" href="https://docs.01.org/clearlinux/latest/guides/clear/swupd.html">swupd — Documentation for Clear Linux* project</a></li><li><a title="clr-boot-manager: Kernel &amp; Boot Loader Management" rel="nofollow" href="https://github.com/clearlinux/clr-boot-manager">clr-boot-manager: Kernel &amp; Boot Loader Management</a></li><li><a title="Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/9745">Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs</a></li><li><a title="Persistent L2ARC might be coming to ZFS on Linux" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/zfs-on-linux-should-get-a-persistent-ssd-read-cache-feature-soon/">Persistent L2ARC might be coming to ZFS on Linux</a> &mdash; The primary ARC is kept in system RAM, but an L2ARC device can be created from one or more fast disks. In a ZFS pool with one or more L2ARC devices, when blocks are evicted from the primary ARC in RAM, they are moved down to L2ARC rather than being thrown away entirely. 
In the past, this feature has been of limited value, both because indexing a large L2ARC occupies system RAM which could have been better used for primary ARC and because L2ARC was not persistent across reboots.</li><li><a title="Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/pull/9582">Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs</a> &mdash; This feature implements a light-weight persistent L2ARC metadata structure that allows L2ARC contents to be recovered after a reboot. This significantly eases the impact a reboot has on read performance on systems with large caches.</li><li><a title="LINUX Unplugged 303: Stateless and Dateless" rel="nofollow" href="https://linuxunplugged.com/303">LINUX Unplugged 303: Stateless and Dateless</a> &mdash; We visit Intel to figure out what Clear Linux is all about and explain a few tricks that make it unique.</li><li><a title="LINUX Unplugged Blog: Clear Linux OS 2019" rel="nofollow" href="https://linuxunplugged.com/articles/clear-linux-os-2019">LINUX Unplugged Blog: Clear Linux OS 2019</a></li><li><a title="HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/hamr-dont-hurt-em-laser-assisted-hard-drives-are-coming-in-2020/">HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020</a> &mdash; Although the 2012 "just around the corner" HAMR drives seem to have been mostly vapor, the technology is a reality now. 
Seagate has been trialing 16TB HAMR drives with select customers for more than a year and claims that the trials have proved that its HAMR drives are "plug and play replacements" for traditional CMR drives, requiring no special care and having no particular poor use cases compared to the drives we're all used to.</li><li><a title="HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units" rel="nofollow" href="https://blog.seagate.com/craftsman-ship/hamr-milestone-seagate-achieves-16tb-capacity-on-internal-hamr-test-units/">HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units</a></li><li><a title="Western Digital debuts 18TB and 20TB near-MAMR disk drives" rel="nofollow" href="https://blocksandfiles.com/2019/09/03/western-digital-18tb-and-20tb-mamr-disk-drives/">Western Digital debuts 18TB and 20TB near-MAMR disk drives</a></li><li><a title="Previously on TechSNAP 341: HAMR Time" rel="nofollow" href="https://techsnap.systems/341">Previously on TechSNAP 341: HAMR Time</a> &mdash; We've got bad news for Wifi-lovers as the KRACK hack takes the world by storm; We have the details &amp; some places to watch to make sure you stay patched. Plus, some distressing revelations about third party access to your personal information through some US mobile carriers. Then we cover the ongoing debate over HAMR, MAMR, and the future of hard drive technology &amp; take a mini deep dive into the world of elliptic curve cryptography.

</li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>408: Apollo's ARC</title>
  <link>https://techsnap.systems/408</link>
  <guid isPermaLink="false">2577b50c-e740-46c8-a75b-14f074cb812a</guid>
  <pubDate>Fri, 26 Jul 2019 00:15:00 -0700</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/2577b50c-e740-46c8-a75b-14f074cb812a.mp3" length="25365234" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC.</itunes:subtitle>
  <itunes:duration>35:13</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. 
Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel. 
</description>
  <itunes:keywords>virtualization, openzfs, zfs, kvm, qemu, vhd, qcow, qcow2, ARC, memory, page cache, caching, ZFS on Linux, ZoL, SIMD, floating point, fpu, apollo, apollo anniversary, nasa, retro computing, magnetic core, core rope, AGC, apollo guidance computer, intel, dancing demon, kernel module, loihi, neuromorphic computing, text adventure, punch cards, Margaret Hamilton, neural networks, machine learning, ai, pohoiki, snapshots, sysadmin, trs-80, cloud, Chris Siebenmann, DevOps, TechSNAP</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. </p>

<p>Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel.</p><p>Links:</p><ul><li><a title="ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-Restoring-SIMD">ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+</a> &mdash; Those running ZFS On Linux (ZoL) on post-5.0 (and pre-5.0 supported LTS releases) have seen big performance hits to the ZFS encryption performance in particular. That came due to upstream breaking an interface used by ZFS On Linux and admittedly not caring about ZoL due to it being an out-of-tree user. But now several kernel releases later, a workaround has been devised. </li><li><a title="ZFS On Linux Runs Into A Snag With Linux 5.0" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-5.0-Problem">ZFS On Linux Runs Into A Snag With Linux 5.0</a></li><li><a title="NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=NixOS-Linux-5.0-ZFS-FPU-Drop">NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+</a> &mdash; A NixOS developer reports that the functions no longer exported by Linux 5.0+ and previously used by ZoL for AVX/AES-NI support end up dropping the ZFS data-set encryption performance to 200MB/s, whereas pre-5.0 kernels ran around 1.2GB/s</li><li><a title="Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313" rel="nofollow" href="https://github.com/zfsonlinux/zfs/commit/e5db31349484e5e859c7a942eb15b98d68ce5b4d">Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313</a> &mdash; Restore the SIMD optimization for 4.19.38 LTS, 4.14.120 LTS,
and 5.0 and newer kernels.  This is accomplished by leveraging
the fact that by definition dedicated kernel threads never need
to concern themselves with saving and restoring the user FPU state.
Therefore, they may use the FPU as long as we can guarantee user
tasks always restore their FPU state before context switching back
to user space.</li><li><a title="no SIMD acceleration · Issue #8793 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/8793">no SIMD acceleration · Issue #8793 · zfsonlinux/zfs</a> &mdash; 4.14.x, 4.19.x, 5.x all have no SIMD acceleration, it is like a turtle. very slow.

</li><li><a title="Chris&#39;s Wiki :: ZFS on Linux still has annoying issues with ARC size" rel="nofollow" href="https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSOnLinuxARCShrinkage">Chris's Wiki :: ZFS on Linux still has annoying issues with ARC size</a> &mdash; One of the frustrating things about operating ZFS on Linux is that the ARC size is critical but ZFS's auto-tuning of it is opaque and apparently prone to malfunctions, where your ARC will mysteriously shrink drastically and then stick there.
</li><li><a title="Software woven into wire, Core rope and the Apollo Guidance Computer" rel="nofollow" href="http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html">Software woven into wire, Core rope and the Apollo Guidance Computer</a> &mdash; One of the first computers to use integrated circuits, the Apollo Guidance Computer was lightweight enough and small enough to fly in space. An unusual feature that contributed to its small size was core rope memory, a technique of physically weaving software into high-density storage.</li><li><a title="Virtual Apollo Guidance Computer (AGC) software" rel="nofollow" href="https://github.com/virtualagc/virtualagc">Virtual Apollo Guidance Computer (AGC) software</a> &mdash; Since you are looking at this README file, you are in the "master" branch of the repository, which contains source-code transcriptions of the original Project Apollo software for the Apollo Guidance Computer (AGC) and Abort Guidance System (AGS), as well as our software for emulating the AGC, AGS, and some of their peripheral devices (such as the display-keyboard unit, or DSKY).</li><li><a title="The Underappreciated Power of the Apollo Computer - The Atlantic" rel="nofollow" href="https://www.theatlantic.com/science/archive/2019/07/underappreciated-power-apollo-computer/594121/">The Underappreciated Power of the Apollo Computer - The Atlantic</a> &mdash; Without the computers on board the Apollo spacecraft, there would have been no moon landing, no triumphant first step, no high-water mark for human space travel. A pilot could never have navigated the way to the moon, as if a spaceship were simply a more powerful airplane. The calculations required to make in-flight adjustments and the complexity of the thrust controls outstripped human capacities.</li><li><a title="Brains scale better than CPUs. 
So Intel is building brains | Ars Technica" rel="nofollow" href="https://arstechnica.com/science/2019/07/brains-scale-better-than-cpus-so-intel-is-building-brains/">Brains scale better than CPUs. So Intel is building brains | Ars Technica</a> &mdash; Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming more and more prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.
</li><li><a title="Dancing Demon - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=6CCJFQ_bP0E">Dancing Demon - YouTube</a> &mdash; Written in 1979 by Leo Christopherson for the Radio Shack TRS-80 Model I computer. This was one of the best games of its time.</li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. </p>

<p>Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel.</p><p>Links:</p><ul><li><a title="ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-Restoring-SIMD">ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+</a> &mdash; Those running ZFS On Linux (ZoL) on post-5.0 (and pre-5.0 supported LTS releases) have seen big performance hits to the ZFS encryption performance in particular. That came due to upstream breaking an interface used by ZFS On Linux and admittedly not caring about ZoL due to it being an out-of-tree user. But now several kernel releases later, a workaround has been devised. </li><li><a title="ZFS On Linux Runs Into A Snag With Linux 5.0" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-5.0-Problem">ZFS On Linux Runs Into A Snag With Linux 5.0</a></li><li><a title="NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=NixOS-Linux-5.0-ZFS-FPU-Drop">NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+</a> &mdash; A NixOS developer reports that the functions no longer exported by Linux 5.0+ and previously used by ZoL for AVX/AES-NI support end up dropping the ZFS data-set encryption performance to 200MB/s, whereas pre-5.0 kernels ran around 1.2GB/s</li><li><a title="Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313" rel="nofollow" href="https://github.com/zfsonlinux/zfs/commit/e5db31349484e5e859c7a942eb15b98d68ce5b4d">Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313</a> &mdash; Restore the SIMD optimization for 4.19.38 LTS, 4.14.120 LTS,
and 5.0 and newer kernels.  This is accomplished by leveraging
the fact that by definition dedicated kernel threads never need
to concern themselves with saving and restoring the user FPU state.
Therefore, they may use the FPU as long as we can guarantee user
tasks always restore their FPU state before context switching back
to user space.</li><li><a title="no SIMD acceleration · Issue #8793 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/8793">no SIMD acceleration · Issue #8793 · zfsonlinux/zfs</a> &mdash; 4.14.x, 4.19.x, 5.x all have no SIMD acceleration, it is like a turtle. very slow.

</li><li><a title="Chris&#39;s Wiki :: ZFS on Linux still has annoying issues with ARC size" rel="nofollow" href="https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSOnLinuxARCShrinkage">Chris's Wiki :: ZFS on Linux still has annoying issues with ARC size</a> &mdash; One of the frustrating things about operating ZFS on Linux is that the ARC size is critical but ZFS's auto-tuning of it is opaque and apparently prone to malfunctions, where your ARC will mysteriously shrink drastically and then stick there.
</li><li><a title="Software woven into wire, Core rope and the Apollo Guidance Computer" rel="nofollow" href="http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html">Software woven into wire, Core rope and the Apollo Guidance Computer</a> &mdash; One of the first computers to use integrated circuits, the Apollo Guidance Computer was lightweight enough and small enough to fly in space. An unusual feature that contributed to its small size was core rope memory, a technique of physically weaving software into high-density storage.</li><li><a title="Virtual Apollo Guidance Computer (AGC) software" rel="nofollow" href="https://github.com/virtualagc/virtualagc">Virtual Apollo Guidance Computer (AGC) software</a> &mdash; Since you are looking at this README file, you are in the "master" branch of the repository, which contains source-code transcriptions of the original Project Apollo software for the Apollo Guidance Computer (AGC) and Abort Guidance System (AGS), as well as our software for emulating the AGC, AGS, and some of their peripheral devices (such as the display-keyboard unit, or DSKY).</li><li><a title="The Underappreciated Power of the Apollo Computer - The Atlantic" rel="nofollow" href="https://www.theatlantic.com/science/archive/2019/07/underappreciated-power-apollo-computer/594121/">The Underappreciated Power of the Apollo Computer - The Atlantic</a> &mdash; Without the computers on board the Apollo spacecraft, there would have been no moon landing, no triumphant first step, no high-water mark for human space travel. A pilot could never have navigated the way to the moon, as if a spaceship were simply a more powerful airplane. The calculations required to make in-flight adjustments and the complexity of the thrust controls outstripped human capacities.</li><li><a title="Brains scale better than CPUs. 
So Intel is building brains | Ars Technica" rel="nofollow" href="https://arstechnica.com/science/2019/07/brains-scale-better-than-cpus-so-intel-is-building-brains/">Brains scale better than CPUs. So Intel is building brains | Ars Technica</a> &mdash; Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming more and more prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.
</li><li><a title="Dancing Demon - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=6CCJFQ_bP0E">Dancing Demon - YouTube</a> &mdash; Written in 1979 by Leo Christopherson for the Radio Shack TRS-80 Model I computer. This was one of the best games of its time.</li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
