I found this useful, particularly the categorization of papers into surveys, benchmarks, and breakthroughs. The presentation is only a few days old, so the specific papers listed are representative of the current state of the art. As a CS PhD friend of mine once told me when I was struggling to understand an algorithm proof: as a practitioner you don't really need to understand the proof, just the results. I suspect that is true for these papers too.
How To Read AI Research Papers Effectively
https://www.youtube.com/watch?v=K6Wui3mn-uI
At a local meetup I mentioned that several years ago Google had released hardware targeted at machine learning, but I could not remember the details. As luck would have it, I listened to Jeff Dean's presentation this weekend, and he mentioned the Tensor Processing Unit (TPU), an accelerator for low-precision linear algebra. Overall, the presentation is interesting and useful. The majority of it focuses on Google's Gemini/Bard, but given who the speaker is, this is understandable.
Jeff Dean (Google): Exciting Trends in Machine Learning
https://www.youtube.com/watch?v=oSCRZkSQ1CE
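To get a feel for what "low-precision linear algebra" means in practice, here is a small sketch using NumPy's float16 as a stand-in for the TPU's bfloat16 (an assumption for illustration; the TPU itself uses different formats and hardware): the matrix product is nearly right, but a little accuracy is traded away for the speed and memory savings that make training cheaper.

```python
import numpy as np

# Illustrative only: float16 stands in here for the TPU's bfloat16.
# Low-precision formats keep fewer mantissa bits, so matrix math is
# faster and smaller in memory at the cost of some rounding error.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))

exact = a @ b  # float64 reference result

# Same product with inputs rounded to 16-bit floats.
low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)

# Relative error: small, but visibly larger than float64 round-off.
rel_err = np.abs(low - exact).max() / np.abs(exact).max()
print(f"max relative error at 16-bit precision: {rel_err:.2e}")
```

The takeaway is that many ML workloads tolerate this rounding error just fine, which is why hardware built around cheap low-precision multiply-accumulate units pays off.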