<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Typal Academy: Podcast]]></title><description><![CDATA[test]]></description><link>https://newsletter.typal.academy/s/podcast</link><image><url>https://substackcdn.com/image/fetch/$s_!zJHo!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F725873a3-c9ed-4352-b9ea-05ef8e1d1928_400x400.png</url><title>Typal Academy: Podcast</title><link>https://newsletter.typal.academy/s/podcast</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 03:05:12 GMT</lastBuildDate><atom:link href="https://newsletter.typal.academy/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Typal Academy]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[typalacademy@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[typalacademy@substack.com]]></itunes:email><itunes:name><![CDATA[Howard Heaton]]></itunes:name></itunes:owner><itunes:author><![CDATA[Howard Heaton]]></itunes:author><googleplay:owner><![CDATA[typalacademy@substack.com]]></googleplay:owner><googleplay:email><![CDATA[typalacademy@substack.com]]></googleplay:email><googleplay:author><![CDATA[Howard Heaton]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[#2 — Deanna Needell]]></title><description><![CDATA[Compressed Sensing, Data Science and Machine Learning]]></description><link>https://newsletter.typal.academy/p/2-deanna-needell</link><guid isPermaLink="false">https://newsletter.typal.academy/p/2-deanna-needell</guid><dc:creator><![CDATA[Howard Heaton]]></dc:creator><pubDate>Mon, 29 Dec 2025 13:03:10 
GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/182033113/ae474375210728cedb49523815c59979.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://www.math.ucla.edu/~deanna/">Deanna Needell</a> is a Professor of Mathematics at University of California, Los Angeles (UCLA) and a leading researcher in compressed sensing, numerical linear algebra, data science, and machine learning. Her work has shaped modern sparse recovery and randomized iterative algorithms, and she is widely known for co-developing CoSaMP, a cornerstone method in compressed sensing. More broadly, her research connects numerical linear algebra and optimization with machine learning.</p><p>Deanna&#8217;s research excellence has been recognized with several honors, including the IMA Prize in Mathematics and its Applications, an NSF CAREER Award, a Sloan Research Fellowship, and election as a Fellow of the American Mathematical Society and a Fellow of SIAM. Beyond theory, Deanna has applied mathematical tools to real-world problems in areas such as imaging, public health, and legal analytics, including work on Lyme disease data and collaborations with organizations like the California Innocence Project. 
She serves as Executive Director of the Institute for Digital Research and Education, holds the Dunn Family Endowed Chair in Data Theory, and is deeply committed to mentorship, inclusiveness, and building bridges between mathematics and society.</p><p>Listen via your favorite app:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://open.spotify.com/show/76AjA4YVwkEuRZgtyOD7w2&quot;,&quot;text&quot;:&quot;Spotify&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://open.spotify.com/show/76AjA4YVwkEuRZgtyOD7w2"><span>Spotify</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://podcasts.apple.com/us/podcast/numerical-optimization/id1779853226&quot;,&quot;text&quot;:&quot;Apple Podcasts&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://podcasts.apple.com/us/podcast/numerical-optimization/id1779853226"><span>Apple Podcasts</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.youtube.com/playlist?list=PL8ZktGnYsFQF0pcysEh2AelIfo7M3vMxH&quot;,&quot;text&quot;:&quot;YouTube&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.youtube.com/playlist?list=PL8ZktGnYsFQF0pcysEh2AelIfo7M3vMxH"><span>YouTube</span></a></p>]]></content:encoded></item><item><title><![CDATA[#1 — Stanley Osher]]></title><description><![CDATA[Partial Differential Equations and Optimization]]></description><link>https://newsletter.typal.academy/p/0001-stanley-osher</link><guid isPermaLink="false">https://newsletter.typal.academy/p/0001-stanley-osher</guid><dc:creator><![CDATA[Howard Heaton]]></dc:creator><pubDate>Mon, 25 Nov 2024 14:02:19 
GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/152008392/42e2b780ee13d77cf90470dbb137897f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Stanley Osher is a longtime UCLA Professor of Mathematics whose work has earned some of the field&#8217;s highest honors, including the Carl Friedrich Gauss Prize (2014), awarded at the International Congress of Mathematicians for major contributions to applied mathematics. He has also been elected to the U.S. National Academy of Sciences (2005), the American Academy of Arts and Sciences (2009), and the National Academy of Engineering (2018), and has received additional major recognitions such as the ICIAM Pioneer Prize (2003) and the SIAM Kleinman Prize (2005).</p><p>Listen via your favorite app:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://open.spotify.com/show/76AjA4YVwkEuRZgtyOD7w2&quot;,&quot;text&quot;:&quot;Spotify&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://open.spotify.com/show/76AjA4YVwkEuRZgtyOD7w2"><span>Spotify</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://podcasts.apple.com/us/podcast/numerical-optimization/id1779853226&quot;,&quot;text&quot;:&quot;Apple Podcasts&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://podcasts.apple.com/us/podcast/numerical-optimization/id1779853226"><span>Apple Podcasts</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.youtube.com/playlist?list=PL8ZktGnYsFQF0pcysEh2AelIfo7M3vMxH&quot;,&quot;text&quot;:&quot;YouTube&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.youtube.com/playlist?list=PL8ZktGnYsFQF0pcysEh2AelIfo7M3vMxH"><span>YouTube</span></a></p><p></p><p>Show 
Notes:</p><ul><li><p>Here is the original <a href="https://members.cbio.mines-paristech.fr/~jvert/svn/bibli/local/Rudin1992Nonlinear.pdf">paper</a> on total variation for denoising.</p></li><li><p>Here is a <a href="https://www.youtube.com/watch?v=bRSpJcPYfLI">talk</a> from 2003 where Stan describes and shows images from the <a href="https://en.wikipedia.org/wiki/Attack_on_Reginald_Denny">attack on the truck driver Reginald Denny</a> during the riots in LA (skip to 11:00 for the story).</p></li><li><p>Here is the <a href="https://ntrs.nasa.gov/api/citations/19880001113/downloads/19880001113.pdf">paper</a> on the level set method.</p></li><li><p>The company Stan cofounded, Luminescent Technologies, Inc., used the level set method for inverse lithography technology.</p></li><li><p>Here is a <a href="https://arxiv.org/pdf/math/0409186">paper</a> by Candes, Romberg, and Tao on compressed sensing, providing rigorous theory for use of the L1 norm.</p></li><li><p>An example of "thinking continuously rather than discretely" is the analysis of Su, Boyd, and Candes, which gives a short and simple proof of Nesterov acceleration by modeling it with a continuous-time ODE (see Theorem 3 in this <a href="https://arxiv.org/pdf/1503.01243">paper</a>).</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Welcome to Numerical Optimization]]></title><description><![CDATA[Podcast Trailer]]></description><link>https://newsletter.typal.academy/p/welcome-to-numerical-optimization</link><guid isPermaLink="false">https://newsletter.typal.academy/p/welcome-to-numerical-optimization</guid><dc:creator><![CDATA[Howard Heaton]]></dc:creator><pubDate>Fri, 15 Nov 2024 18:33:50 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/151973044/5c62b4925729db66a7fcbe35ce77ab3e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Our mission is to inspire the development of new math research aimed at solving real-world problems. 
We do this by sharing fun stories behind math formulas and the places they show up.</p>]]></content:encoded></item></channel></rss>