<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Mon, 20 Apr 2026 14:28:28 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>TechSNAP - Episodes Tagged with “Passwords”</title>
    <link>https://techsnap.systems/tags/passwords</link>
    <pubDate>Fri, 15 Mar 2019 19:30:00 -0700</pubDate>
    <description>Systems, Network, and Administration Podcast. Every two weeks TechSNAP covers the stories that impact those of us in the tech industry, and all of us that follow it. Every episode we dedicate a portion of the show to answering audience questions, discussing best practices, and solving your problems.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Systems, Network, and Administration Podcast.</itunes:subtitle>
    <itunes:author>Jupiter Broadcasting</itunes:author>
    <itunes:summary>Systems, Network, and Administration Podcast. Every two weeks TechSNAP covers the stories that impact those of us in the tech industry, and all of us that follow it. Every episode we dedicate a portion of the show to answering audience questions, discussing best practices, and solving your problems.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Jupiter Broadcasting</itunes:name>
      <itunes:email>chris@jupiterbroadcasting.com</itunes:email>
    </itunes:owner>
<itunes:category text="News">
  <itunes:category text="Tech News"/>
</itunes:category>
<item>
  <title>399: Ethics in AI</title>
  <link>https://techsnap.systems/399</link>
  <guid isPermaLink="false">6a9e036e-abe5-4b0c-b727-2d3dab34ce1d</guid>
  <pubDate>Fri, 15 Mar 2019 19:30:00 -0700</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/6a9e036e-abe5-4b0c-b727-2d3dab34ce1d.mp3" length="27942893" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems.</itunes:subtitle>
  <itunes:duration>38:48</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems. 
</description>
  <itunes:keywords>machine learning, AI, expert systems, supervised learning, unsupervised learning, neural networks, bias, racism, zo, tay, reinforcement learning, python, algorithms, programming, data, privacy, server builds, plaintext offenders, CivicPlus, passwords, computer vision, natural language processing, classification, GloVe, word2vec, scikit-learn, Robyn Speer, ConceptNet, SysAdmin podcast, DevOps, TechSNAP, chatbot</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems.</p><p>Links:</p><ul><li><a title="Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants" rel="nofollow" href="https://www.technologyreview.com/s/610634/microsofts-neo-nazi-sexbot-was-a-great-lesson-for-makers-of-ai-assistants/">Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants</a> &mdash; What started out as an entertaining social experiment—get regular people to talk to a chatbot so it could learn while they, hopefully, had fun—became a nightmare for Tay’s creators. Users soon figured out how to make Tay say awful things. Microsoft took the chatbot offline after less than a day.</li><li><a title="Microsoft&#39;s Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse" rel="nofollow" href="https://qz.com/1340990/microsofts-politically-correct-chat-bot-is-even-worse-than-its-racist-one/">Microsoft's Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse</a> &mdash; A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and Groupme.</li><li><a title="How to make a racist AI without really trying | ConceptNet blog" rel="nofollow" href="http://blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai-without-really-trying/">How to make a racist AI without really trying | ConceptNet blog</a> &mdash; Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There’s no trade-off here. You can have data that’s better and less racist. You can have data that’s better because it’s less racist. 
There was never anything “accurate” about the overt racism that word2vec and GloVe learned.</li><li><a title="Microsoft warned investors that biased or flawed AI could hurt the company’s image" rel="nofollow" href="https://qz.com/1542377/microsoft-warned-investors-that-biased-or-flawed-ai-could-hurt-the-companys-image/">Microsoft warned investors that biased or flawed AI could hurt the company’s image</a> &mdash; Notably, this addition comes after a research paper by MIT Media Lab graduate researcher Joy Buolamwini showed in February 2018 that Microsoft’s facial recognition algorithm was less accurate for women and people of color. In response, Microsoft updated its facial recognition models, and wrote a blog post about how it was addressing bias in its software.</li><li><a title="AI bias: It is the responsibility of humans to ensure fairness" rel="nofollow" href="https://www.information-age.com/ai-bias-123479217/">AI bias: It is the responsibility of humans to ensure fairness</a> &mdash; Amazon recently pulled the plug on its experimental AI-powered recruitment engine when it was discovered that the machine learning technology behind it was exhibiting bias against female applicants.</li><li><a title="California Police Using AI Program That Tells Them Where to Patrol, Critics Say It May Just Reinforce Racial Bias" rel="nofollow" href="https://www.newsweek.com/california-police-artificial-intelligence-predictive-policing-predpol-santa-1358508">California Police Using AI Program That Tells Them Where to Patrol, Critics Say It May Just Reinforce Racial Bias</a> &mdash; “The potential for bias to creep into the deployment of the tools is enormous. Simply put, the devil is in the data,” Vincent Southerland, executive director of the Center on Race, Inequality, and the Law at NYU School of Law, wrote for the American Civil Liberties Union last year.

</li><li><a title="A.I. Could Worsen Health Disparities" rel="nofollow" href="https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html">A.I. Could Worsen Health Disparities</a> &mdash; A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?</li><li><a title="Responsible AI Practices" rel="nofollow" href="https://ai.google/education/responsible-ai-practices">Responsible AI Practices</a> &mdash; These questions are far from solved, and in fact are active areas of research and development. Google is committed to making progress in the responsible development of AI and to sharing knowledge, research, tools, datasets, and other resources with the larger community. Below we share some of our current work and recommended practices.</li><li><a title="The Ars Technica System Guide, Winter 2019: The one about the servers" rel="nofollow" href="https://arstechnica.com/gadgets/2019/03/the-ars-technica-system-guide-winter-2019-the-one-about-the-servers/">The Ars Technica System Guide, Winter 2019: The one about the servers</a> &mdash; The Winter 2019 Ars System Guide has returned to its roots: showing readers three real-world system builds we like at this precise moment in time. Instead of general performance desktops, this time around we're going to focus specifically on building some servers.</li><li><a title="Introduction to Python Development at Linux Academy" rel="nofollow" href="https://linuxacademy.com/devops/training/course/name/intro-to-python-development?utm_source=social&amp;utm_medium=twitter&amp;utm_campaign=2019_aprilcourselaunch">Introduction to Python Development at Linux Academy</a> &mdash; This course is designed to teach you how to program using Python. 
We'll cover the building blocks of the language, programming design fundamentals, how to use the standard library, third-party packages, and how to create Python projects. In the end, you should have a grasp of how to program.</li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Machine learning promises to change many industries, but with these changes come dangerous new risks. Join Jim and Wes as they explore some of the surprising ways bias can creep in and the serious consequences of ignoring these problems.</p><p>Links:</p><ul><li><a title="Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants" rel="nofollow" href="https://www.technologyreview.com/s/610634/microsofts-neo-nazi-sexbot-was-a-great-lesson-for-makers-of-ai-assistants/">Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants</a> &mdash; What started out as an entertaining social experiment—get regular people to talk to a chatbot so it could learn while they, hopefully, had fun—became a nightmare for Tay’s creators. Users soon figured out how to make Tay say awful things. Microsoft took the chatbot offline after less than a day.</li><li><a title="Microsoft&#39;s Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse" rel="nofollow" href="https://qz.com/1340990/microsofts-politically-correct-chat-bot-is-even-worse-than-its-racist-one/">Microsoft's Zo chatbot is a politically correct version of her sister Tay—except she’s much, much worse</a> &mdash; A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and Groupme.</li><li><a title="How to make a racist AI without really trying | ConceptNet blog" rel="nofollow" href="http://blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai-without-really-trying/">How to make a racist AI without really trying | ConceptNet blog</a> &mdash; Some people expect that fighting algorithmic racism is going to come with some sort of trade-off. There’s no trade-off here. You can have data that’s better and less racist. You can have data that’s better because it’s less racist. 
There was never anything “accurate” about the overt racism that word2vec and GloVe learned.</li><li><a title="Microsoft warned investors that biased or flawed AI could hurt the company’s image" rel="nofollow" href="https://qz.com/1542377/microsoft-warned-investors-that-biased-or-flawed-ai-could-hurt-the-companys-image/">Microsoft warned investors that biased or flawed AI could hurt the company’s image</a> &mdash; Notably, this addition comes after a research paper by MIT Media Lab graduate researcher Joy Buolamwini showed in February 2018 that Microsoft’s facial recognition algorithm was less accurate for women and people of color. In response, Microsoft updated its facial recognition models, and wrote a blog post about how it was addressing bias in its software.</li><li><a title="AI bias: It is the responsibility of humans to ensure fairness" rel="nofollow" href="https://www.information-age.com/ai-bias-123479217/">AI bias: It is the responsibility of humans to ensure fairness</a> &mdash; Amazon recently pulled the plug on its experimental AI-powered recruitment engine when it was discovered that the machine learning technology behind it was exhibiting bias against female applicants.</li><li><a title="California Police Using AI Program That Tells Them Where to Patrol, Critics Say It May Just Reinforce Racial Bias" rel="nofollow" href="https://www.newsweek.com/california-police-artificial-intelligence-predictive-policing-predpol-santa-1358508">California Police Using AI Program That Tells Them Where to Patrol, Critics Say It May Just Reinforce Racial Bias</a> &mdash; “The potential for bias to creep into the deployment of the tools is enormous. Simply put, the devil is in the data,” Vincent Southerland, executive director of the Center on Race, Inequality, and the Law at NYU School of Law, wrote for the American Civil Liberties Union last year.

</li><li><a title="A.I. Could Worsen Health Disparities" rel="nofollow" href="https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html">A.I. Could Worsen Health Disparities</a> &mdash; A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?</li><li><a title="Responsible AI Practices" rel="nofollow" href="https://ai.google/education/responsible-ai-practices">Responsible AI Practices</a> &mdash; These questions are far from solved, and in fact are active areas of research and development. Google is committed to making progress in the responsible development of AI and to sharing knowledge, research, tools, datasets, and other resources with the larger community. Below we share some of our current work and recommended practices.</li><li><a title="The Ars Technica System Guide, Winter 2019: The one about the servers" rel="nofollow" href="https://arstechnica.com/gadgets/2019/03/the-ars-technica-system-guide-winter-2019-the-one-about-the-servers/">The Ars Technica System Guide, Winter 2019: The one about the servers</a> &mdash; The Winter 2019 Ars System Guide has returned to its roots: showing readers three real-world system builds we like at this precise moment in time. Instead of general performance desktops, this time around we're going to focus specifically on building some servers.</li><li><a title="Introduction to Python Development at Linux Academy" rel="nofollow" href="https://linuxacademy.com/devops/training/course/name/intro-to-python-development?utm_source=social&amp;utm_medium=twitter&amp;utm_campaign=2019_aprilcourselaunch">Introduction to Python Development at Linux Academy</a> &mdash; This course is designed to teach you how to program using Python. 
We'll cover the building blocks of the language, programming design fundamentals, how to use the standard library, third-party packages, and how to create Python projects. In the end, you should have a grasp of how to program.</li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>398: Proper Password Procedures</title>
  <link>https://techsnap.systems/398</link>
  <guid isPermaLink="false">9c4e48b3-6aef-470f-82d5-d954c5bca39a</guid>
  <pubDate>Thu, 28 Feb 2019 18:00:00 -0800</pubDate>
  <author>Jupiter Broadcasting</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/95197d05-40d6-4e68-8e0b-2f586ce8dc55/9c4e48b3-6aef-470f-82d5-d954c5bca39a.mp3" length="22603569" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Jupiter Broadcasting</itunes:author>
  <itunes:subtitle>We reveal the shady password practices that are all too common at many utility providers, and hash out why salts are essential to proper password storage.</itunes:subtitle>
  <itunes:duration>31:23</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/9/95197d05-40d6-4e68-8e0b-2f586ce8dc55/cover.jpg?v=4"/>
  <description>We reveal the shady password practices that are all too common at many utility providers, and hash out why salts are essential to proper password storage.
Plus the benefits of passphrases, and what you can do to keep your local providers on the up and up. 
</description>
  <itunes:keywords>Passwords, Password Salt, Cryptography, Cryptographic Hash, Utility, power company, SEDC, OWASP, entropy, password manager, plaintext, hashing algorithms, bcrypt, scrypt, pbkdf2, encryption, keepass, lastpass, 1password, offline encryption, PCI-DSS, standards, compliance, ethics, burp intruder, pivot, security, security research, software development, cracking, rainbow tables, brute force, SysAdmin podcast, DevOps, TechSNAP</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We reveal the shady password practices that are all too common at many utility providers, and hash out why salts are essential to proper password storage.</p>

<p>Plus the benefits of passphrases, and what you can do to keep your local providers on the up and up.</p><p>Links:</p><ul><li><a title="Plain wrong: Millions of utility customers’ passwords stored in plain text | Ars Technica" rel="nofollow" href="https://arstechnica.com/tech-policy/2019/02/plain-wrong-millions-of-utility-customers-passwords-stored-in-plain-text/">Plain wrong: Millions of utility customers’ passwords stored in plain text | Ars Technica</a> &mdash; In September of 2018, an anonymous independent security researcher (who we'll call X) noticed that their power company's website was offering to email—not reset!—lost account passwords to forgetful users. Startled, X fed the online form the utility account number and the last four phone number digits it was asking for. Sure enough, a few minutes later the account password, in plain text, was sitting in X's inbox.</li><li><a title="The LinkedIn Hack: Understanding Why It Was So Easy to Crack the Passwords" rel="nofollow" href="https://inspiredelearning.com/blog/the-linkedin-hack-understanding-why-it-was-so-easy-to-crack-the-passwords-2/">The LinkedIn Hack: Understanding Why It Was So Easy to Crack the Passwords</a> &mdash; LinkedIn stated that after the initial 2012 breach, they added enhanced protection, most likely adding the “salt” functionality to their passwords. However, if you have not changed your password since 2012, you do not have the added protection of a salted password hash. You may be asking yourself–what on earth are hashing and salting and how does this all work?</li><li><a title="How Developers got Password Security so Wrong" rel="nofollow" href="https://blog.cloudflare.com/how-developers-got-password-security-so-wrong/">How Developers got Password Security so Wrong</a> &mdash; As time has gone on, developers have continued to store passwords insecurely, and users have continued to set them weakly. 
Despite this, no viable alternative has been created for password security.</li><li><a title="Adding Salt to Hashing: A Better Way to Store Passwords" rel="nofollow" href="https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/">Adding Salt to Hashing: A Better Way to Store Passwords</a> &mdash; A salt is added to the hashing process to force their uniqueness, increase their complexity without increasing user requirements, and to mitigate password attacks like rainbow tables.

</li><li><a title="Why Do Developers Get Password Storage Wrong? A Qualitative Usability Study" rel="nofollow" href="https://arxiv.org/abs/1708.08759">Why Do Developers Get Password Storage Wrong? A Qualitative Usability Study</a> &mdash; We were interested in exploring two particular aspects: Firstly, do developers get things wrong because they do not think about security and thus do not include security features (but could if they wanted to)? Or do they write insecure code because the complexity of the task is too great for them? Secondly, a common suggestion to increase security is to offer secure defaults.</li><li><a title="OWASP Password Storage Cheatsheet" rel="nofollow" href="https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Password_Storage_Cheat_Sheet.md">OWASP Password Storage Cheatsheet</a> &mdash; This article provides guidance on properly storing passwords, secret question responses, and similar credential information.</li><li><a title="Secure Salted Password Hashing - How to do it Properly" rel="nofollow" href="https://crackstation.net/hashing-security.htm">Secure Salted Password Hashing - How to do it Properly</a> &mdash; If you're a web developer, you've probably had to make a user account system. The most important aspect of a user account system is how user passwords are protected. User account databases are hacked frequently, so you absolutely must do something to protect your users' passwords if your website is ever breached. The best way to protect passwords is to employ salted password hashing. This page will explain why it's done the way it is.</li><li><a title="Plain Text Offenders" rel="nofollow" href="http://plaintextoffenders.com/">Plain Text Offenders</a> &mdash; We’re tired of websites abusing our trust and storing our passwords in plain text, exposing us to danger. 
Here we put websites we believe to be practicing this to shame.</li><li><a title="Cybersecurity 101: Why you need to use a password manager | TechCrunch" rel="nofollow" href="https://techcrunch.com/2018/12/25/cybersecurity-101-guide-password-manager/">Cybersecurity 101: Why you need to use a password manager | TechCrunch</a> &mdash; Think of a password manager like a book of your passwords, locked by a master key that only you know.</li><li><a title="On the Security of Password Managers - Schneier on Security" rel="nofollow" href="https://www.schneier.com/blog/archives/2019/02/on_the_security_1.html">On the Security of Password Managers - Schneier on Security</a> &mdash; There's new research on the security of password managers, specifically 1Password, Dashlane, KeePass, and LastPass. This work specifically looks at password leakage on the host computer. That is, does the password manager accidentally leave plaintext copies of the password lying around memory?</li><li><a title="LinuxFest Northwest 2019" rel="nofollow" href="https://linuxfestnorthwest.org/conferences/2019">LinuxFest Northwest 2019</a> &mdash; It's the 20th anniversary of LinuxFest Northwest! Come join your favorite Jupiter Broadcasting hosts at the Pacific Northwest's premier Linux event.</li><li><a title="SCALE 17x" rel="nofollow" href="https://www.socallinuxexpo.org/scale/17x">SCALE 17x</a> &mdash; The 17th annual Southern California Linux Expo will take place March 7-10, 2019, at the Pasadena Convention Center. SCaLE 17x expects to host 150 exhibitors this year, along with nearly 130 sessions, tutorials and special events.</li><li><a title="Jupiter Broadcasting Meetups" rel="nofollow" href="https://www.meetup.com/jupiterbroadcasting/">Jupiter Broadcasting Meetups</a> &mdash; The best place to find out when Jupiter Broadcasting has a meetup near you! Also stay tuned for upcoming virtual study groups.</li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We reveal the shady password practices that are all too common at many utility providers, and hash out why salts are essential to proper password storage.</p>

<p>Plus the benefits of passphrases, and what you can do to keep your local providers on the up and up.</p><p>Links:</p><ul><li><a title="Plain wrong: Millions of utility customers’ passwords stored in plain text | Ars Technica" rel="nofollow" href="https://arstechnica.com/tech-policy/2019/02/plain-wrong-millions-of-utility-customers-passwords-stored-in-plain-text/">Plain wrong: Millions of utility customers’ passwords stored in plain text | Ars Technica</a> &mdash; In September of 2018, an anonymous independent security researcher (who we'll call X) noticed that their power company's website was offering to email—not reset!—lost account passwords to forgetful users. Startled, X fed the online form the utility account number and the last four phone number digits it was asking for. Sure enough, a few minutes later the account password, in plain text, was sitting in X's inbox.</li><li><a title="The LinkedIn Hack: Understanding Why It Was So Easy to Crack the Passwords" rel="nofollow" href="https://inspiredelearning.com/blog/the-linkedin-hack-understanding-why-it-was-so-easy-to-crack-the-passwords-2/">The LinkedIn Hack: Understanding Why It Was So Easy to Crack the Passwords</a> &mdash; LinkedIn stated that after the initial 2012 breach, they added enhanced protection, most likely adding the “salt” functionality to their passwords. However, if you have not changed your password since 2012, you do not have the added protection of a salted password hash. You may be asking yourself–what on earth are hashing and salting and how does this all work?</li><li><a title="How Developers got Password Security so Wrong" rel="nofollow" href="https://blog.cloudflare.com/how-developers-got-password-security-so-wrong/">How Developers got Password Security so Wrong</a> &mdash; As time has gone on, developers have continued to store passwords insecurely, and users have continued to set them weakly. 
Despite this, no viable alternative has been created for password security.</li><li><a title="Adding Salt to Hashing: A Better Way to Store Passwords" rel="nofollow" href="https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/">Adding Salt to Hashing: A Better Way to Store Passwords</a> &mdash; A salt is added to the hashing process to force their uniqueness, increase their complexity without increasing user requirements, and to mitigate password attacks like rainbow tables.

</li><li><a title="Why Do Developers Get Password Storage Wrong? A Qualitative Usability Study" rel="nofollow" href="https://arxiv.org/abs/1708.08759">Why Do Developers Get Password Storage Wrong? A Qualitative Usability Study</a> &mdash; We were interested in exploring two particular aspects: Firstly, do developers get things wrong because they do not think about security and thus do not include security features (but could if they wanted to)? Or do they write insecure code because the complexity of the task is too great for them? Secondly, a common suggestion to increase security is to offer secure defaults.</li><li><a title="OWASP Password Storage Cheatsheet" rel="nofollow" href="https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Password_Storage_Cheat_Sheet.md">OWASP Password Storage Cheatsheet</a> &mdash; This article provides guidance on properly storing passwords, secret question responses, and similar credential information.</li><li><a title="Secure Salted Password Hashing - How to do it Properly" rel="nofollow" href="https://crackstation.net/hashing-security.htm">Secure Salted Password Hashing - How to do it Properly</a> &mdash; If you're a web developer, you've probably had to make a user account system. The most important aspect of a user account system is how user passwords are protected. User account databases are hacked frequently, so you absolutely must do something to protect your users' passwords if your website is ever breached. The best way to protect passwords is to employ salted password hashing. This page will explain why it's done the way it is.</li><li><a title="Plain Text Offenders" rel="nofollow" href="http://plaintextoffenders.com/">Plain Text Offenders</a> &mdash; We’re tired of websites abusing our trust and storing our passwords in plain text, exposing us to danger. 
Here we put websites we believe to be practicing this to shame.</li><li><a title="Cybersecurity 101: Why you need to use a password manager | TechCrunch" rel="nofollow" href="https://techcrunch.com/2018/12/25/cybersecurity-101-guide-password-manager/">Cybersecurity 101: Why you need to use a password manager | TechCrunch</a> &mdash; Think of a password manager like a book of your passwords, locked by a master key that only you know.</li><li><a title="On the Security of Password Managers - Schneier on Security" rel="nofollow" href="https://www.schneier.com/blog/archives/2019/02/on_the_security_1.html">On the Security of Password Managers - Schneier on Security</a> &mdash; There's new research on the security of password managers, specifically 1Password, Dashlane, KeePass, and LastPass. This work specifically looks at password leakage on the host computer. That is, does the password manager accidentally leave plaintext copies of the password lying around memory?</li><li><a title="LinuxFest Northwest 2019" rel="nofollow" href="https://linuxfestnorthwest.org/conferences/2019">LinuxFest Northwest 2019</a> &mdash; It's the 20th anniversary of LinuxFest Northwest! Come join your favorite Jupiter Broadcasting hosts at the Pacific Northwest's premier Linux event.</li><li><a title="SCALE 17x" rel="nofollow" href="https://www.socallinuxexpo.org/scale/17x">SCALE 17x</a> &mdash; The 17th annual Southern California Linux Expo will take place March 7-10, 2019, at the Pasadena Convention Center. SCaLE 17x expects to host 150 exhibitors this year, along with nearly 130 sessions, tutorials and special events.</li><li><a title="Jupiter Broadcasting Meetups" rel="nofollow" href="https://www.meetup.com/jupiterbroadcasting/">Jupiter Broadcasting Meetups</a> &mdash; The best place to find out when Jupiter Broadcasting has a meetup near you! Also stay tuned for upcoming virtual study groups.</li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
