<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 12 Apr 2026 18:00:53 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>TechSNAP - Episodes Tagged with “Zfs On Linux”</title>
    <link>https://techsnap.systems/tags/zfs%20on%20linux</link>
    <pubDate>Fri, 21 Feb 2020 18:00:00 -0800</pubDate>
    <description>Systems, Network, and Administration Podcast. Every two weeks TechSNAP covers the stories that impact those of us in the tech industry, and all of us who follow it. Every episode we dedicate a portion of the show to answering audience questions, discussing best practices, and solving your problems.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Systems, Network, and Administration Podcast. </itunes:subtitle>
    <itunes:author>Jupiter Broadcasting</itunes:author>
    <itunes:summary>Systems, Network, and Administration Podcast. Every two weeks TechSNAP covers the stories that impact those of us in the tech industry, and all of us who follow it. Every episode we dedicate a portion of the show to answering audience questions, discussing best practices, and solving your problems.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Jupiter Broadcasting</itunes:name>
      <itunes:email>chris@jupiterbroadcasting.com</itunes:email>
    </itunes:owner>
<itunes:category text="News">
  <itunes:category text="Tech News"/>
</itunes:category>
<item>
  <title>423: Hopeful for HAMR</title>
  <link>https://techsnap.systems/423</link>
  <guid isPermaLink="false">579b3028-f4b8-408a-ad04-ee0f8d017f78</guid>
  <pubDate>Fri, 21 Feb 2020 18:00:00 -0800</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/579b3028-f4b8-408a-ad04-ee0f8d017f78.mp3" length="21313956" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC.</itunes:subtitle>
  <itunes:duration>29:36</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC. 
Plus Jim's journeys with Clear Linux, and why Ubuntu 18.04.4 is a maintenance release worth talking about. 
</description>
  <itunes:keywords>Ubuntu, 18.04.4, 18.04, LTS, Linux, WiFi, hardware enablement, maintenance release, Clear Linux OS, Linux desktop, Intel, Clear Linux, benchmarks, performance, swupd, ZFS, ZFS on Linux, ZoL, MobaXterm, LRU, WSL, Windows, Microsoft, L2ARC, ARC, filesystems, cache, caching, HDD, storage, hard drives, HAMR, SMR, MAMR, Seagate, Western Digital, latency, throughput, DevOps, TechSNAP, Jupiter Broadcasting, A Cloud Guru, Linux Academy, sysadmin podcast</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC. </p>

<p>Plus Jim&#39;s journeys with Clear Linux, and why Ubuntu 18.04.4 is a maintenance release worth talking about.</p><p>Links:</p><ul><li><a title="Ubuntu 18.04.4 LTS: here&#39;s what&#39;s new" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/ubuntu-18-04-4-lts-released-wednesday-heres-whats-new/">Ubuntu 18.04.4 LTS: here's what's new</a> &mdash; It's not as shiny and exciting as entirely new versions, of course, but it does pack in some worthwhile security and bugfix upgrades, as well as support for more and newer hardware.</li><li><a title="18.04.4 - Ubuntu Wiki" rel="nofollow" href="https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes/ChangeSummary/18.04.4">18.04.4 - Ubuntu Wiki</a></li><li><a title="MobaXterm" rel="nofollow" href="https://mobaxterm.mobatek.net/">MobaXterm</a> &mdash; Enhanced terminal for Windows with X11 server, tabbed SSH client, network tools and much more.</li><li><a title="Linux distro review: Intel’s own Clear Linux OS" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/linux-distro-review-intels-own-clear-linux-os/?comments=1">Linux distro review: Intel’s own Clear Linux OS</a> &mdash; There's not much question that Clear Linux is your best bet if you want to turn in the best possible benchmark numbers. The question not addressed here is, what's it like to run Clear Linux as a daily driver? We were curious, so we took it for a spin.</li><li><a title="Clear Linux* Project" rel="nofollow" href="https://clearlinux.org/">Clear Linux* Project</a> &mdash; Clear Linux OS is an open source, rolling release Linux distribution optimized for performance and security, from the Cloud to the Edge, designed for customization, and manageability.</li><li><a title="swupd — Documentation for Clear Linux* project" rel="nofollow" href="https://docs.01.org/clearlinux/latest/guides/clear/swupd.html">swupd — Documentation for Clear Linux* project</a></li><li><a title="clr-boot-manager: Kernel &amp; Boot Loader Management" rel="nofollow" href="https://github.com/clearlinux/clr-boot-manager">clr-boot-manager: Kernel &amp; Boot Loader Management</a></li><li><a title="Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/9745">Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs</a></li><li><a title="Persistent L2ARC might be coming to ZFS on Linux" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/zfs-on-linux-should-get-a-persistent-ssd-read-cache-feature-soon/">Persistent L2ARC might be coming to ZFS on Linux</a> &mdash; The primary ARC is kept in system RAM, but an L2ARC device can be created from one or more fast disks. In a ZFS pool with one or more L2ARC devices, when blocks are evicted from the primary ARC in RAM, they are moved down to L2ARC rather than being thrown away entirely. In the past, this feature has been of limited value, both because indexing a large L2ARC occupies system RAM which could have been better used for primary ARC and because L2ARC was not persistent across reboots.</li><li><a title="Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/pull/9582">Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs</a> &mdash; This feature implements a light-weight persistent L2ARC metadata structure that allows L2ARC contents to be recovered after a reboot. 
This significantly eases the impact a reboot has on read performance on systems with large caches.</li><li><a title="LINUX Unplugged 303: Stateless and Dateless" rel="nofollow" href="https://linuxunplugged.com/303">LINUX Unplugged 303: Stateless and Dateless</a> &mdash; We visit Intel to figure out what Clear Linux is all about and explain a few tricks that make it unique.</li><li><a title="LINUX Unplugged Blog: Clear Linux OS 2019" rel="nofollow" href="https://linuxunplugged.com/articles/clear-linux-os-2019">LINUX Unplugged Blog: Clear Linux OS 2019</a></li><li><a title="HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/hamr-dont-hurt-em-laser-assisted-hard-drives-are-coming-in-2020/">HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020</a> &mdash; Although the 2012 "just around the corner" HAMR drives seem to have been mostly vapor, the technology is a reality now. Seagate has been trialing 16TB HAMR drives with select customers for more than a year and claims that the trials have proved that its HAMR drives are "plug and play replacements" for traditional CMR drives, requiring no special care and having no particular poor use cases compared to the drives we're all used to.</li><li><a title="HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units" rel="nofollow" href="https://blog.seagate.com/craftsman-ship/hamr-milestone-seagate-achieves-16tb-capacity-on-internal-hamr-test-units/">HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units</a></li><li><a title="Western Digital debuts 18TB and 20TB near-MAMR disk drives" rel="nofollow" href="https://blocksandfiles.com/2019/09/03/western-digital-18tb-and-20tb-mamr-disk-drives/">Western Digital debuts 18TB and 20TB near-MAMR disk drives</a></li><li><a title="Previously on TechSNAP 341: HAMR Time" rel="nofollow" href="https://techsnap.systems/341">Previously on TechSNAP 341: HAMR Time</a> &mdash; We've got bad news for Wifi-lovers as the KRACK hack takes the world by storm; we have the details &amp; some places to watch to make sure you stay patched. Plus, some distressing revelations about third-party access to your personal information through some US mobile carriers. Then we cover the ongoing debate over HAMR, MAMR, and the future of hard drive technology &amp; take a mini deep dive into the world of elliptic curve cryptography.

</li></ul>
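<p>To make the cache mechanics in that Ars Technica excerpt concrete, here is a minimal sketch in plain Python (illustrative only; the class and file names are made up and this is not ZFS's actual algorithm): entries evicted from the small primary tier are demoted to a second tier, and persisting that tier's index is what lets the cache stay warm across a restart.</p>

<pre><code>import json, os

class TwoTierCache:
    """Toy model of ARC plus L2ARC: entries evicted from the small
    in-memory tier are demoted to a second tier whose index can be
    written out and reloaded, so it survives a "reboot"."""

    def __init__(self, ram_slots, index_path):
        self.ram_slots = ram_slots
        self.index_path = index_path
        self.ram = {}                       # primary cache (RAM)
        self.l2 = {}                        # second-level cache (fast SSD)
        if os.path.exists(index_path):      # the "persistent" part:
            with open(index_path) as f:     # rebuild L2 after a restart
                self.l2 = json.load(f)

    def read(self, key, fetch):
        if key in self.ram:
            return self.ram[key]            # primary hit
        if key in self.l2:
            value = self.l2[key]            # L2 hit, warm across reboots
        else:
            value = fetch(key)              # miss: fall back to slow storage
        self._admit(key, value)
        return value

    def _admit(self, key, value):
        if len(self.ram) == self.ram_slots:
            oldest = next(iter(self.ram))   # FIFO stand-in for ARC's policy
            self.l2[oldest] = self.ram.pop(oldest)  # demote, don't discard
        self.ram[key] = value

    def shutdown(self):
        with open(self.index_path, "w") as f:
            json.dump(self.l2, f)           # persist the L2 index

cache = TwoTierCache(ram_slots=128, index_path="l2-index.json")
block = cache.read("block-7", fetch=lambda key: "slow read of " + key)
cache.shutdown()   # a later run warm-starts from l2-index.json
</code></pre>]]>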
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We explore the potential of heat-assisted magnetic recording and get excited about a possibly persistent L2ARC. </p>

<p>Plus Jim&#39;s journeys with Clear Linux, and why Ubuntu 18.04.4 is a maintenance release worth talking about.</p><p>Links:</p><ul><li><a title="Ubuntu 18.04.4 LTS: here&#39;s what&#39;s new" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/ubuntu-18-04-4-lts-released-wednesday-heres-whats-new/">Ubuntu 18.04.4 LTS: here's what's new</a> &mdash; It's not as shiny and exciting as entirely new versions, of course, but it does pack in some worthwhile security and bugfix upgrades, as well as support for more and newer hardware.</li><li><a title="18.04.4 - Ubuntu Wiki" rel="nofollow" href="https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes/ChangeSummary/18.04.4">18.04.4 - Ubuntu Wiki</a></li><li><a title="MobaXterm" rel="nofollow" href="https://mobaxterm.mobatek.net/">MobaXterm</a> &mdash; Enhanced terminal for Windows with X11 server, tabbed SSH client, network tools and much more.</li><li><a title="Linux distro review: Intel’s own Clear Linux OS" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/linux-distro-review-intels-own-clear-linux-os/?comments=1">Linux distro review: Intel’s own Clear Linux OS</a> &mdash; There's not much question that Clear Linux is your best bet if you want to turn in the best possible benchmark numbers. The question not addressed here is, what's it like to run Clear Linux as a daily driver? We were curious, so we took it for a spin.</li><li><a title="Clear Linux* Project" rel="nofollow" href="https://clearlinux.org/">Clear Linux* Project</a> &mdash; Clear Linux OS is an open source, rolling release Linux distribution optimized for performance and security, from the Cloud to the Edge, designed for customization, and manageability.</li><li><a title="swupd — Documentation for Clear Linux* project" rel="nofollow" href="https://docs.01.org/clearlinux/latest/guides/clear/swupd.html">swupd — Documentation for Clear Linux* project</a></li><li><a title="clr-boot-manager: Kernel &amp; Boot Loader Management" rel="nofollow" href="https://github.com/clearlinux/clr-boot-manager">clr-boot-manager: Kernel &amp; Boot Loader Management</a></li><li><a title="Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/9745">Cannot compile zfs for 5.5-rc2 · Issue #9745 · zfsonlinux/zfs</a></li><li><a title="Persistent L2ARC might be coming to ZFS on Linux" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/zfs-on-linux-should-get-a-persistent-ssd-read-cache-feature-soon/">Persistent L2ARC might be coming to ZFS on Linux</a> &mdash; The primary ARC is kept in system RAM, but an L2ARC device can be created from one or more fast disks. In a ZFS pool with one or more L2ARC devices, when blocks are evicted from the primary ARC in RAM, they are moved down to L2ARC rather than being thrown away entirely. In the past, this feature has been of limited value, both because indexing a large L2ARC occupies system RAM which could have been better used for primary ARC and because L2ARC was not persistent across reboots.</li><li><a title="Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/pull/9582">Persistent L2ARC by gamanakis · Pull Request #9582 · zfsonlinux/zfs</a> &mdash; This feature implements a light-weight persistent L2ARC metadata structure that allows L2ARC contents to be recovered after a reboot. 
This significantly eases the impact a reboot has on read performance on systems with large caches.</li><li><a title="LINUX Unplugged 303: Stateless and Dateless" rel="nofollow" href="https://linuxunplugged.com/303">LINUX Unplugged 303: Stateless and Dateless</a> &mdash; We visit Intel to figure out what Clear Linux is all about and explain a few tricks that make it unique.</li><li><a title="LINUX Unplugged Blog: Clear Linux OS 2019" rel="nofollow" href="https://linuxunplugged.com/articles/clear-linux-os-2019">LINUX Unplugged Blog: Clear Linux OS 2019</a></li><li><a title="HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020" rel="nofollow" href="https://arstechnica.com/gadgets/2020/02/hamr-dont-hurt-em-laser-assisted-hard-drives-are-coming-in-2020/">HAMR don’t hurt ’em: laser-assisted hard drives are coming in 2020</a> &mdash; Although the 2012 "just around the corner" HAMR drives seem to have been mostly vapor, the technology is a reality now. Seagate has been trialing 16TB HAMR drives with select customers for more than a year and claims that the trials have proved that its HAMR drives are "plug and play replacements" for traditional CMR drives, requiring no special care and having no particular poor use cases compared to the drives we're all used to.</li><li><a title="HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units" rel="nofollow" href="https://blog.seagate.com/craftsman-ship/hamr-milestone-seagate-achieves-16tb-capacity-on-internal-hamr-test-units/">HAMR Milestone: Seagate Achieves 16TB Capacity on Internal HAMR Test Units</a></li><li><a title="Western Digital debuts 18TB and 20TB near-MAMR disk drives" rel="nofollow" href="https://blocksandfiles.com/2019/09/03/western-digital-18tb-and-20tb-mamr-disk-drives/">Western Digital debuts 18TB and 20TB near-MAMR disk drives</a></li><li><a title="Previously on TechSNAP 341: HAMR Time" rel="nofollow" href="https://techsnap.systems/341">Previously on TechSNAP 341: HAMR Time</a> &mdash; We've got bad news for Wifi-lovers as the KRACK hack takes the world by storm; we have the details &amp; some places to watch to make sure you stay patched. Plus, some distressing revelations about third-party access to your personal information through some US mobile carriers. Then we cover the ongoing debate over HAMR, MAMR, and the future of hard drive technology &amp; take a mini deep dive into the world of elliptic curve cryptography.

</li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>408: Apollo's ARC</title>
  <link>https://techsnap.systems/408</link>
  <guid isPermaLink="false">2577b50c-e740-46c8-a75b-14f074cb812a</guid>
  <pubDate>Fri, 26 Jul 2019 00:15:00 -0700</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/2577b50c-e740-46c8-a75b-14f074cb812a.mp3" length="25365234" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC.</itunes:subtitle>
  <itunes:duration>35:13</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. 
Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel. 
</description>
  <itunes:keywords>virtualization, openzfs, zfs, kvm, qemu, vhd, qcow, qcow2, ARC, memory, page cache, caching, ZFS on Linux, ZoL, SIMD, floating point, fpu, apollo, apollo anniversary, nasa, retro computing, magnetic core, core rope, AGC, apollo guidance computer, intel, dancing demon, kernel module, loihi, neuromorphic computing, text adventure, punch cards, Margaret Hamilton, neural networks, machine learning, ai, pohoiki, snapshots, sysadmin, trs-80, cloud, Chris Siebenmann, DevOps, TechSNAP</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. </p>

<p>Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel.</p><p>Links:</p><ul><li><a title="ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-Restoring-SIMD">ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+</a> &mdash; Those running ZFS On Linux (ZoL) on post-5.0 (and pre-5.0 supported LTS releases) have seen big hits to ZFS encryption performance in particular. That came due to upstream breaking an interface used by ZFS On Linux and admittedly not caring about ZoL due to it being an out-of-tree user. But now, several kernel releases later, a workaround has been devised.</li><li><a title="ZFS On Linux Runs Into A Snag With Linux 5.0" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-5.0-Problem">ZFS On Linux Runs Into A Snag With Linux 5.0</a></li><li><a title="NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=NixOS-Linux-5.0-ZFS-FPU-Drop">NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+</a> &mdash; A NixOS developer reports that the functions no longer exported by Linux 5.0+ and previously used by ZoL for AVX/AES-NI support end up dropping ZFS data-set encryption performance to 200MB/s, whereas pre-5.0 kernels ran around 1.2GB/s.</li><li><a title="Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313" rel="nofollow" href="https://github.com/zfsonlinux/zfs/commit/e5db31349484e5e859c7a942eb15b98d68ce5b4d">Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313</a> &mdash; Restore the SIMD optimization for 4.19.38 LTS, 4.14.120 LTS,
and 5.0 and newer kernels.  This is accomplished by leveraging
the fact that by definition dedicated kernel threads never need
to concern themselves with saving and restoring the user FPU state.
Therefore, they may use the FPU as long as we can guarantee user
tasks always restore their FPU state before context switching back
to user space.</li><li><a title="no SIMD acceleration · Issue #8793 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/8793">no SIMD acceleration · Issue #8793 · zfsonlinux/zfs</a> &mdash; 4.14.x, 4.19.x, 5.x all have no SIMD acceleration, it is like a turtle. very slow.

</li><li><a title="Chris&#39;s Wiki :: ZFS on Linux still has annoying issues with ARC size" rel="nofollow" href="https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSOnLinuxARCShrinkage">Chris's Wiki :: ZFS on Linux still has annoying issues with ARC size</a> &mdash; One of the frustrating things about operating ZFS on Linux is that the ARC size is critical but ZFS's auto-tuning of it is opaque and apparently prone to malfunctions, where your ARC will mysteriously shrink drastically and then stick there.
</li><li><a title="Software woven into wire, Core rope and the Apollo Guidance Computer" rel="nofollow" href="http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html">Software woven into wire, Core rope and the Apollo Guidance Computer</a> &mdash; One of the first computers to use integrated circuits, the Apollo Guidance Computer was lightweight enough and small enough to fly in space. An unusual feature that contributed to its small size was core rope memory, a technique of physically weaving software into high-density storage.</li><li><a title="Virtual Apollo Guidance Computer (AGC) software" rel="nofollow" href="https://github.com/virtualagc/virtualagc">Virtual Apollo Guidance Computer (AGC) software</a> &mdash; Since you are looking at this README file, you are in the "master" branch of the repository, which contains source-code transcriptions of the original Project Apollo software for the Apollo Guidance Computer (AGC) and Abort Guidance System (AGS), as well as our software for emulating the AGC, AGS, and some of their peripheral devices (such as the display-keyboard unit, or DSKY).</li><li><a title="The Underappreciated Power of the Apollo Computer - The Atlantic" rel="nofollow" href="https://www.theatlantic.com/science/archive/2019/07/underappreciated-power-apollo-computer/594121/">The Underappreciated Power of the Apollo Computer - The Atlantic</a> &mdash; Without the computers on board the Apollo spacecraft, there would have been no moon landing, no triumphant first step, no high-water mark for human space travel. A pilot could never have navigated the way to the moon, as if a spaceship were simply a more powerful airplane. The calculations required to make in-flight adjustments and the complexity of the thrust controls outstripped human capacities.</li><li><a title="Brains scale better than CPUs. So Intel is building brains | Ars Technica" rel="nofollow" href="https://arstechnica.com/science/2019/07/brains-scale-better-than-cpus-so-intel-is-building-brains/">Brains scale better than CPUs. So Intel is building brains | Ars Technica</a> &mdash; Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming more and more prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.
</li><li><a title="Dancing Demon - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=6CCJFQ_bP0E">Dancing Demon - YouTube</a> &mdash; Written in 1979 by Leo Christopherson for the Radio Shack TRS-80 Model I computer. This is the best game ever for at that time.</li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We take a look at the amazing abilities of the Apollo Guidance Computer and Jim breaks down everything you need to know about the ZFS ARC. </p>

<p>Plus an update on ZoL SIMD acceleration, your feedback, and an interesting new neuromorphic system from Intel.</p><p>Links:</p><ul><li><a title="ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-Restoring-SIMD">ZFS On Linux Has Figured Out A Way To Restore SIMD Support On Linux 5.0+</a> &mdash; Those running ZFS On Linux (ZoL) on post-5.0 (and pre-5.0 supported LTS releases) have seen big hits to ZFS encryption performance in particular. That came due to upstream breaking an interface used by ZFS On Linux and admittedly not caring about ZoL due to it being an out-of-tree user. But now, several kernel releases later, a workaround has been devised.</li><li><a title="ZFS On Linux Runs Into A Snag With Linux 5.0" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-5.0-Problem">ZFS On Linux Runs Into A Snag With Linux 5.0</a></li><li><a title="NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=NixOS-Linux-5.0-ZFS-FPU-Drop">NixOS Takes Action After 1.2GB/s ZFS Encryption Speed Drops To 200MB/s With Linux 5.0+</a> &mdash; A NixOS developer reports that the functions no longer exported by Linux 5.0+ and previously used by ZoL for AVX/AES-NI support end up dropping ZFS data-set encryption performance to 200MB/s, whereas pre-5.0 kernels ran around 1.2GB/s.</li><li><a title="Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313" rel="nofollow" href="https://github.com/zfsonlinux/zfs/commit/e5db31349484e5e859c7a942eb15b98d68ce5b4d">Linux 5.0 compat: SIMD compatibility · zfsonlinux/zfs@e5db313</a> &mdash; Restore the SIMD optimization for 4.19.38 LTS, 4.14.120 LTS,
and 5.0 and newer kernels.  This is accomplished by leveraging
the fact that by definition dedicated kernel threads never need
to concern themselves with saving and restoring the user FPU state.
Therefore, they may use the FPU as long as we can guarantee user
tasks always restore their FPU state before context switching back
to user space.</li><li><a title="no SIMD acceleration · Issue #8793 · zfsonlinux/zfs" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/8793">no SIMD acceleration · Issue #8793 · zfsonlinux/zfs</a> &mdash; 4.14.x, 4.19.x, 5.x all have no SIMD acceleration, it is like a turtle. very slow.

</li><li><a title="Chris&#39;s Wiki :: ZFS on Linux still has annoying issues with ARC size" rel="nofollow" href="https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSOnLinuxARCShrinkage">Chris's Wiki :: ZFS on Linux still has annoying issues with ARC size</a> &mdash; One of the frustrating things about operating ZFS on Linux is that the ARC size is critical but ZFS's auto-tuning of it is opaque and apparently prone to malfunctions, where your ARC will mysteriously shrink drastically and then stick there.
</li><li><a title="Software woven into wire, Core rope and the Apollo Guidance Computer" rel="nofollow" href="http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html">Software woven into wire, Core rope and the Apollo Guidance Computer</a> &mdash; One of the first computers to use integrated circuits, the Apollo Guidance Computer was lightweight enough and small enough to fly in space. An unusual feature that contributed to its small size was core rope memory, a technique of physically weaving software into high-density storage.</li><li><a title="Virtual Apollo Guidance Computer (AGC) software" rel="nofollow" href="https://github.com/virtualagc/virtualagc">Virtual Apollo Guidance Computer (AGC) software</a> &mdash; Since you are looking at this README file, you are in the "master" branch of the repository, which contains source-code transcriptions of the original Project Apollo software for the Apollo Guidance Computer (AGC) and Abort Guidance System (AGS), as well as our software for emulating the AGC, AGS, and some of their peripheral devices (such as the display-keyboard unit, or DSKY).</li><li><a title="The Underappreciated Power of the Apollo Computer - The Atlantic" rel="nofollow" href="https://www.theatlantic.com/science/archive/2019/07/underappreciated-power-apollo-computer/594121/">The Underappreciated Power of the Apollo Computer - The Atlantic</a> &mdash; Without the computers on board the Apollo spacecraft, there would have been no moon landing, no triumphant first step, no high-water mark for human space travel. A pilot could never have navigated the way to the moon, as if a spaceship were simply a more powerful airplane. The calculations required to make in-flight adjustments and the complexity of the thrust controls outstripped human capacities.</li><li><a title="Brains scale better than CPUs. So Intel is building brains | Ars Technica" rel="nofollow" href="https://arstechnica.com/science/2019/07/brains-scale-better-than-cpus-so-intel-is-building-brains/">Brains scale better than CPUs. So Intel is building brains | Ars Technica</a> &mdash; Neuromorphic engineering—building machines that mimic the function of organic brains in hardware as well as software—is becoming more and more prominent. The field has progressed rapidly, from conceptual beginnings in the late 1980s to experimental field programmable neural arrays in 2006, early memristor-powered device proposals in 2012, IBM's TrueNorth NPU in 2014, and Intel's Loihi neuromorphic processor in 2017. Yesterday, Intel broke a little more new ground with the debut of a larger-scale neuromorphic system, Pohoiki Beach, which integrates 64 of its Loihi chips.
</li><li><a title="Dancing Demon - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=6CCJFQ_bP0E">Dancing Demon - YouTube</a> &mdash; Written in 1979 by Leo Christopherson for the Radio Shack TRS-80 Model I computer. This is the best game ever for at that time.</li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>402: Snapshot Sanity</title>
  <link>https://techsnap.systems/402</link>
  <guid isPermaLink="false">fbd74a16-dc81-4558-b87a-ff25a23a3669</guid>
  <pubDate>Thu, 25 Apr 2019 16:45:00 -0700</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/fbd74a16-dc81-4558-b87a-ff25a23a3669.mp3" length="22728016" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We continue our take on ZFS as Jim and Wes dive into snapshots, replication, and the magic of copy-on-write.</itunes:subtitle>
  <itunes:duration>31:33</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>We continue our take on ZFS as Jim and Wes dive into snapshots, replication, and the magic of copy-on-write.
Plus some handy tools to manage your snapshots, rsync war stories, and more! 
</description>
  <itunes:keywords>zfs, openzfs, zfs on linux, ZoL, snapshots, replication, sanoid, syncoid, policy based, snapshot management, copy on write, functional filesystem, toml, linked list, data integrity, crash consistent, atomic, atomic snapshot, rsync, cron, filesystems, warstories, SysAdmin podcast, DevOps, TechSNAP</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We continue our take on ZFS as Jim and Wes dive into snapshots, replication, and the magic of copy-on-write.</p>

<p>Plus some handy tools to manage your snapshots, rsync war stories, and more!</p><p>Links:</p><ul><li><a title="sanoid: Policy-driven snapshot management and replication tools." rel="nofollow" href="https://github.com/jimsalterjrs/sanoid">sanoid: Policy-driven snapshot management and replication tools.</a> &mdash; Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.

</li><li><a title="Syncoid" rel="nofollow" href="https://github.com/jimsalterjrs/sanoid#syncoid">Syncoid</a> &mdash; Sanoid also includes a replication tool, syncoid, which facilitates the asynchronous incremental replication of ZFS filesystems. </li><li><a title="Copy-on-write - Wikipedia" rel="nofollow" href="https://en.wikipedia.org/wiki/Copy-on-write">Copy-on-write - Wikipedia</a></li><li><a title="ZFS Paper" rel="nofollow" href="https://www.cpp.edu/~gkuri/classes/ece426/ZFS.pdf">ZFS Paper</a></li><li><a title="The Magic Behind APFS: Copy-On-Write" rel="nofollow" href="https://mac-optimization.bestreviews.net/the-magic-behind-apfs-copy-on-write/">The Magic Behind APFS: Copy-On-Write</a> &mdash; The brand-new Apple File System (APFS) that landed with macOS High Sierra brings a handful of important new features that rely on a technique called copy-on-write (CoW).</li><li><a title="Chapter 19. The Z File System (ZFS)" rel="nofollow" href="https://www.freebsd.org/doc/handbook/zfs.html">Chapter 19. The Z File System (ZFS)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We continue our take on ZFS as Jim and Wes dive into snapshots, replication, and the magic of copy-on-write.</p>

<p>Plus some handy tools to manage your snapshots, rsync war stories, and more!</p><p>Links:</p><ul><li><a title="sanoid: Policy-driven snapshot management and replication tools." rel="nofollow" href="https://github.com/jimsalterjrs/sanoid">sanoid: Policy-driven snapshot management and replication tools.</a> &mdash; Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.

</li><li><a title="Syncoid" rel="nofollow" href="https://github.com/jimsalterjrs/sanoid#syncoid">Syncoid</a> &mdash; Sanoid also includes a replication tool, syncoid, which facilitates the asynchronous incremental replication of ZFS filesystems. </li><li><a title="Copy-on-write - Wikipedia" rel="nofollow" href="https://en.wikipedia.org/wiki/Copy-on-write">Copy-on-write - Wikipedia</a></li><li><a title="ZFS Paper" rel="nofollow" href="https://www.cpp.edu/~gkuri/classes/ece426/ZFS.pdf">ZFS Paper</a></li><li><a title="The Magic Behind APFS: Copy-On-Write" rel="nofollow" href="https://mac-optimization.bestreviews.net/the-magic-behind-apfs-copy-on-write/">The Magic Behind APFS: Copy-On-Write</a> &mdash; The brand-new Apple File System (APFS) that landed with macOS High Sierra brings a handful of important new features that rely on a technique called copy-on-write (CoW).</li><li><a title="Chapter 19. The Z File System (ZFS)" rel="nofollow" href="https://www.freebsd.org/doc/handbook/zfs.html">Chapter 19. The Z File System (ZFS)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>401: Everyday ZFS</title>
  <link>https://techsnap.systems/401</link>
  <guid isPermaLink="false">ea1f89db-e748-47fd-b288-833a330704ce</guid>
  <pubDate>Thu, 11 Apr 2019 22:15:00 -0700</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/ea1f89db-e748-47fd-b288-833a330704ce.mp3" length="34263376" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>Jim and Wes sit down to bust some ZFS myths and share their tips and tricks for getting the most out of the ultimate filesystem.</itunes:subtitle>
  <itunes:duration>47:35</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>Jim and Wes sit down to bust some ZFS myths and share their tips and tricks for getting the most out of the ultimate filesystem.
Plus when not to use ZFS, the surprising way your disks are lying to you, and more! 
</description>
  <itunes:keywords>zfs, vdev, filesystems, sun microsystems, backups, snapshots, copy on write, throughput, iops, linux, GPL, CDDL, ZFS on Linux, ZoL, ashift, SSD, TechSNAP, sysadmin podcast, DevOps, data integrity, checksum, ECC, hard drives, hard disks, FreeBSD, OpenZFS, Solaris, RAID, raidz, zfs on root, ubuntu, copyleft</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Jim and Wes sit down to bust some ZFS myths and share their tips and tricks for getting the most out of the ultimate filesystem.</p>

<p>Plus when not to use ZFS, the surprising way your disks are lying to you, and more!</p><p>Links:</p><ul><li><a title="ZFS - Ubuntu Wiki" rel="nofollow" href="https://wiki.ubuntu.com/ZFS">ZFS - Ubuntu Wiki</a> &mdash; ZFS is a combined file system and logical volume manager designed and implemented by a team at Sun Microsystems led by Jeff Bonwick and Matthew Ahrens.</li><li><a title="Performance tuning - OpenZFS" rel="nofollow" href="http://open-zfs.org/wiki/Performance_tuning#Alignment_shift">Performance tuning - OpenZFS</a> &mdash; Make sure that you create your pools such that the vdevs have the correct alignment shift for your storage device's size. If dealing with flash media, this is going to be either 12 (4K sectors) or 13 (8K sectors).</li></ul>
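<p>That alignment-shift advice is worth a worked example (plain Python, purely illustrative): ashift is the base-2 logarithm of the smallest write ZFS will issue to a vdev, and it is fixed for the life of the vdev, so guessing low on a drive that misreports its sector size means a read-modify-write cycle on every small write.</p>

<pre><code>def sector_bytes(ashift):
    # ashift is log2 of the vdev's minimum write size.
    return 2 ** ashift

assert sector_bytes(9) == 512     # what many 4K drives claim to be
assert sector_bytes(12) == 4096   # 4K "Advanced Format" disks, most SSDs
assert sector_bytes(13) == 8192   # 8K pages on some flash media

# The cost of believing a lying disk: an ashift=9 pool on 4K-native
# media issues 512-byte writes, and the drive must then read, patch,
# and rewrite a whole 4096-byte physical sector for each one.
print(sector_bytes(12) // sector_bytes(9), "logical sectors per physical sector")
</code></pre>]]>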
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Jim and Wes sit down to bust some ZFS myths and share their tips and tricks for getting the most out of the ultimate filesystem.</p>

<p>Plus when not to use ZFS, the surprising way your disks are lying to you, and more!</p><p>Links:</p><ul><li><a title="ZFS - Ubuntu Wiki" rel="nofollow" href="https://wiki.ubuntu.com/ZFS">ZFS - Ubuntu Wiki</a> &mdash; ZFS is a combined file system and logical volume manager designed and implemented by a team at Sun Microsystems led by Jeff Bonwick and Matthew Ahrens.</li><li><a title="Performance tuning - OpenZFS" rel="nofollow" href="http://open-zfs.org/wiki/Performance_tuning#Alignment_shift">Performance tuning - OpenZFS</a> &mdash; Make sure that you create your pools such that the vdevs have the correct alignment shift for your storage device's size. If dealing with flash media, this is going to be either 12 (4K sectors) or 13 (8K sectors).</li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>396: Floating Point Problems</title>
  <link>https://techsnap.systems/396</link>
  <guid isPermaLink="false">bc968a3f-c804-4203-ae2b-dc43ef919218</guid>
  <pubDate>Thu, 31 Jan 2019 20:45:00 -0800</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/bc968a3f-c804-4203-ae2b-dc43ef919218.mp3" length="19582037" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>Jim and Wes are joined by OpenZFS developer Richard Yao to explain why the recent drama over Linux kernel 5.0 is no big deal, and how his fix for the underlying issue might actually make things faster.</itunes:subtitle>
  <itunes:duration>27:11</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>Jim and Wes are joined by OpenZFS developer Richard Yao to explain why the recent drama over Linux kernel 5.0 is no big deal, and how his fix for the underlying issue might actually make things faster.
Plus the nitty-gritty details of vectorized optimizations and kernel preemption, and our thoughts on the future of the relationship between ZFS and Linux. Special Guest: Richard Yao.
</description>
  <itunes:keywords>GPL, CDDL, Oracle, FPU, SIMD, vectorized instructions, AVX, hardware acceleration, journaling, data integrity, LFNW, floating point, checksum, snapshot, clone, FreeBSD, kernel module, header, software license, Linux, Multitasking, kernel preemption, OpenZFS, ZFS, ZoL, ZFS on Linux, Storage, RAID, ZVOL, SysAdmin podcast, DevOps, TechSNAP</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Jim and Wes are joined by OpenZFS developer Richard Yao to explain why the recent drama over Linux kernel 5.0 is no big deal, and how his fix for the underlying issue might actually make things faster.</p>

<p>Plus the nitty-gritty details of vectorized optimizations and kernel preemption, and our thoughts on the future of the relationship between ZFS and Linux.</p><p>Special Guest: Richard Yao.</p><p>Links:</p><ul><li><a title="LinuxFest Northwest 2019" rel="nofollow" href="https://linuxfestnorthwest.org/conferences/2019">LinuxFest Northwest 2019</a> &mdash; Join a bunch of JB hosts and community celebrating the 20th anniversary! </li><li><a title="Choose Linux" rel="nofollow" href="https://chooselinux.show/">Choose Linux</a> &mdash; The show that captures the excitement of discovering Linux.</li><li><a title="Linux 5.0: _kernel_fpu{begin,end} no longer exported" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/8259">Linux 5.0: _kernel_fpu{begin,end} no longer exported</a> &mdash; The latest kernels removed the old compatibility headers.</li><li><a title="ZFS On Linux Landing Workaround For Linux 5.0 Kernel Support" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-5.0-Workaround">ZFS On Linux Landing Workaround For Linux 5.0 Kernel Support</a> &mdash; So while these symbols are important for SIMD vectorized checksums for ZFS in the name of performance, with Linux 5.0+ they are not going to be exported for use by non-GPL modules. ZFS On Linux developer Tony Hutter has now staged a change that would disable vector instructions on Linux 5.0+ kernels.</li><li><a title="Re: x86/fpu: Don&#39;t export __kernel_fpu_{begin,end}()" rel="nofollow" href="https://marc.info/?l=linux-kernel&amp;m=154714516832389">Re: x86/fpu: Don't export __kernel_fpu_{begin,end}()</a> &mdash; My tolerance for ZFS is pretty non-existant.  Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?</li><li><a title="The future of ZFS in FreeBSD" rel="nofollow" href="https://lists.freebsd.org/pipermail/freebsd-current/2018-December/072422.html">The future of ZFS in FreeBSD</a> &mdash; This state of affairs has led to a general agreement among the stakeholders that I have spoken to that it makes sense to rebase FreeBSD's ZFS on ZoL. Brian Behlendorf has graciously encouraged me to add FreeBSD support directly so that we might all have a single shared code base.</li><li><a title="Delphix: Kickoff to The Future" rel="nofollow" href="https://www.delphix.com/blog/kickoff-future-eko-2018">Delphix: Kickoff to The Future</a> &mdash; OpenZFS has grown over the last decade, and delivering our application on Linux provides great OpenZFS support while enabling higher velocity adoption of new environments.</li><li><a title="The future of ZFS on Linux [zfs-discuss] " rel="nofollow" href="http://list.zfsonlinux.org/pipermail/zfs-discuss/2019-January/033300.html">The future of ZFS on Linux [zfs-discuss] </a> &mdash; 
Do you realize that we don’t actually need the symbols that the kernel removed? All they do is save/restore of register state while turning off/on preemption. Nothing stops us from doing that ourselves. It is possible to implement our own substitutes using code from either Illumos or FreeBSD or even write our own. 

Honestly, I am beginning to think that my attempt to compromise with mainline gave the wrong impression. I am simply tired of this behavior by them and felt like reaching out to put an end to it. In a few weeks, we will likely be running on Linux 5.0 as if those symbols had never been removed because we will almost certainly have our own substitutes for them. Having to bloat our code because mainline won’t give us access to trivial functionality is annoying, but it is not the end of the world.</li><li><a title="LINUX Unplugged Episode 284: Free as in Get Out" rel="nofollow" href="https://linuxunplugged.com/284">LINUX Unplugged Episode 284: Free as in Get Out</a></li><li><a title="BSD Now 279: Future of ZFS" rel="nofollow" href="https://www.bsdnow.tv/episodes/2019_01_02-future_of_zfs">BSD Now 279: Future of ZFS</a></li><li><a title="BSD Now 157: ZFS, The “Universal” File-system" rel="nofollow" href="https://www.bsdnow.tv/episodes/2016_08_31-the_universal_filesystem">BSD Now 157: ZFS, The “Universal” File-system</a></li></ul>
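<p>The zfs-discuss post above boils the whole dispute down to one pattern: save the register state, turn off preemption, use the FPU, then restore. A toy model of that pattern in plain Python (purely conceptual; the real substitutes for __kernel_fpu_begin()/__kernel_fpu_end() are kernel C, and every name here is invented):</p>

<pre><code>from contextlib import contextmanager

# The "registers" are shared state, so before borrowing them you must
# stop being interrupted (preemption off) and save the owner's values.

REGISTERS = {"xmm0": 1.5, "xmm1": 2.5}    # pretend user-task FPU state
preempt_count = 0                          # 0 means preemptible

@contextmanager
def fpu_region():
    global preempt_count
    preempt_count += 1                     # stand-in for preempt_disable()
    saved = dict(REGISTERS)                # save the user task's state
    try:
        yield REGISTERS                    # kernel code may clobber freely
    finally:
        REGISTERS.update(saved)            # restore before anyone else runs
        preempt_count -= 1                 # stand-in for preempt_enable()

with fpu_region() as regs:
    regs["xmm0"] = 42.0                    # vectorized checksum scribbles here

assert REGISTERS["xmm0"] == 1.5            # the user task never notices
</code></pre>]]>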
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Jim and Wes are joined by OpenZFS developer Richard Yao to explain why the recent drama over Linux kernel 5.0 is no big deal, and how his fix for the underlying issue might actually make things faster.</p>

<p>Plus the nitty-gritty details of vectorized optimizations and kernel preemption, and our thoughts on the future of the relationship between ZFS and Linux.</p><p>Special Guest: Richard Yao.</p><p>Links:</p><ul><li><a title="LinuxFest Northwest 2019" rel="nofollow" href="https://linuxfestnorthwest.org/conferences/2019">LinuxFest Northwest 2019</a> &mdash; Join a bunch of JB hosts and community celebrating the 20th anniversary! </li><li><a title="Choose Linux" rel="nofollow" href="https://chooselinux.show/">Choose Linux</a> &mdash; The show that captures the excitement of discovering Linux.</li><li><a title="Linux 5.0: _kernel_fpu{begin,end} no longer exported" rel="nofollow" href="https://github.com/zfsonlinux/zfs/issues/8259">Linux 5.0: _kernel_fpu{begin,end} no longer exported</a> &mdash; The latest kernels removed the old compatibility headers.</li><li><a title="ZFS On Linux Landing Workaround For Linux 5.0 Kernel Support" rel="nofollow" href="https://www.phoronix.com/scan.php?page=news_item&amp;px=ZFS-On-Linux-5.0-Workaround">ZFS On Linux Landing Workaround For Linux 5.0 Kernel Support</a> &mdash; So while these symbols are important for SIMD vectorized checksums for ZFS in the name of performance, with Linux 5.0+ they are not going to be exported for use by non-GPL modules. ZFS On Linux developer Tony Hutter has now staged a change that would disable vector instructions on Linux 5.0+ kernels.</li><li><a title="Re: x86/fpu: Don&#39;t export __kernel_fpu_{begin,end}()" rel="nofollow" href="https://marc.info/?l=linux-kernel&amp;m=154714516832389">Re: x86/fpu: Don't export __kernel_fpu_{begin,end}()</a> &mdash; My tolerance for ZFS is pretty non-existant.  Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?</li><li><a title="The future of ZFS in FreeBSD" rel="nofollow" href="https://lists.freebsd.org/pipermail/freebsd-current/2018-December/072422.html">The future of ZFS in FreeBSD</a> &mdash; This state of affairs has led to a general agreement among the stakeholders that I have spoken to that it makes sense to rebase FreeBSD's ZFS on ZoL. Brian Behlendorf has graciously encouraged me to add FreeBSD support directly so that we might all have a single shared code base.</li><li><a title="Delphix: Kickoff to The Future" rel="nofollow" href="https://www.delphix.com/blog/kickoff-future-eko-2018">Delphix: Kickoff to The Future</a> &mdash; OpenZFS has grown over the last decade, and delivering our application on Linux provides great OpenZFS support while enabling higher velocity adoption of new environments.</li><li><a title="The future of ZFS on Linux [zfs-discuss] " rel="nofollow" href="http://list.zfsonlinux.org/pipermail/zfs-discuss/2019-January/033300.html">The future of ZFS on Linux [zfs-discuss] </a> &mdash; 
Do you realize that we don’t actually need the symbols that the kernel removed? All they do is save/restore of register state while turning off/on preemption. Nothing stops us from doing that ourselves. It is possible to implement our own substitutes using code from either Illumos or FreeBSD or even write our own. 

Honestly, I am beginning to think that my attempt to compromise with mainline gave the wrong impression. I am simply tired of this behavior by them and felt like reaching out to put an end to it. In a few weeks, we will likely be running on Linux 5.0 as if those symbols had never been removed because we will almost certainly have our own substitutes for them. Having to bloat our code because mainline won’t give us access to trivial functionality is annoying, but it is not the end of the world.</li><li><a title="LINUX Unplugged Episode 284: Free as in Get Out" rel="nofollow" href="https://linuxunplugged.com/284">LINUX Unplugged Episode 284: Free as in Get Out</a></li><li><a title="BSD Now 279: Future of ZFS" rel="nofollow" href="https://www.bsdnow.tv/episodes/2019_01_02-future_of_zfs">BSD Now 279: Future of ZFS</a></li><li><a title="BSD Now 157: ZFS, The “Universal” File-system" rel="nofollow" href="https://www.bsdnow.tv/episodes/2016_08_31-the_universal_filesystem">BSD Now 157: ZFS, The “Universal” File-system</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
