A Byte of Coding Issue 377

Hey-yo,

Happy April Fools'. I was going to come up with some elaborate prank, but it appears my AI has been explicitly prohibited by its prompt from doing so.

Anyway, here’s the issue.

Made possible through generous sponsorship by:

Published: 29 March 2024

Tags: math, jax, python

Mat Kelcey explores using Kalman filters to predict the position of an object moving along a 2D trajectory.

Some highlights:

  • uses jax (“JAX is NumPy on the CPU, GPU, and TPU, with great automatic differentiation for high-performance machine learning research”)

  • Kalman filters estimate the hidden state of a dynamic system from noisy measurements and predict its next state

  • starts with a NumPy implementation of the filter that is then ported to JAX; see the sketch after these highlights for the basic predict/update step
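
That predict/update cycle, as a minimal jax.numpy sketch for a 2D constant-velocity model. This is not Mat Kelcey's code; F, H, Q, and R are the standard textbook symbols, and the noise values are made up for illustration.

    import jax.numpy as jnp

    dt = 1.0
    # State is [x, y, vx, vy]; constant-velocity transition matrix.
    F = jnp.array([[1., 0., dt, 0.],
                   [0., 1., 0., dt],
                   [0., 0., 1., 0.],
                   [0., 0., 0., 1.]])
    # We only observe position.
    H = jnp.array([[1., 0., 0., 0.],
                   [0., 1., 0., 0.]])
    Q = 0.01 * jnp.eye(4)  # process noise
    R = 0.5 * jnp.eye(2)   # measurement noise

    def predict(x, P):
        # Propagate the state estimate and its covariance one step forward.
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        # Fold a new position measurement z into the estimate.
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ jnp.linalg.inv(S)  # Kalman gain
        return x + K @ y, (jnp.eye(4) - K @ H) @ P

    x, P = jnp.zeros(4), jnp.eye(4)
    x, P = predict(x, P)
    x, P = update(x, P, jnp.array([1.0, 0.5]))

Running predict/update in a loop over a stream of noisy position measurements gives smoothed estimates of both position and velocity, which is the kind of setup the article ports from NumPy to JAX.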

Published: 29 March 2024

Tags: infosec, linux, ssh

Andres Freund discovers a backdoor in xz/liblzma, a compression library shipped by most Linux distributions. This is the original email he sent with the discovery. A higher-level overview is available here. An overview of affected systems and how you can check your own is available here.

Some highlights:

  • THIS IS A BIG DEAL AND YOU SHOULD DEFINITELY READ THIS IF YOU’RE RUNNING LINUX ANYWHERE

  • the backdoor hooks into sshd via liblzma, letting a malicious actor with the right private key gain root-level access over SSH

  • very impressive that this was caught and huge kudos to Andres Freund
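
If you want a quick first pass at checking a machine: the backdoored releases were xz/liblzma 5.6.0 and 5.6.1. Here's a minimal Python sketch (not the official detection script from the linked overview) that just reports the locally installed xz version:

    import subprocess

    # Ask the local xz binary for its version string.
    out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
    print(out.strip())
    if any(v in out for v in ("5.6.0", "5.6.1")):
        print("WARNING: this is one of the backdoored xz releases -- follow your distro's advisory.")
    else:
        print("Version not in the known-bad list (5.6.0 / 5.6.1).")

Distributions shipped their own patched builds and advisories, so the linked overview is the authoritative place to check your specific system.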

Published: 25 March 2024

Tags: sponsored, auth

WorkOS published an expansive overview of user management.

Some highlights:

  • covers important topics for building an enterprise-ready, resilient B2B authentication system

  • covers 101, 201, and 301 topic levels for user management

  • presents an easy alternative to implementing user management

Published: 31 March 2024

Tags: machine learning, ai

Sebastian Raschka goes over “a paper that discusses strategies for the continued pretraining of LLMs, followed by a discussion of reward modeling used in reinforcement learning with human feedback (a popular LLM alignment method), along with a new benchmark”.

Some highlights:

  • “Continued pretraining for LLMs is an important topic because it allows us to update existing LLMs, for instance, ensuring that these models remain up-to-date with the latest information and trends”

  • “Reward modeling is important because it allows us to align LLMs more closely with human preferences and, to some extent, helps with safety”

  • Reward modeling “also provides a mechanism for learning and adapting LLMs to complex tasks by providing instruction-output examples where explicit programming of correct behavior is challenging or impractical”
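
For a feel of what reward modeling means in practice, here's a toy sketch (not code from the paper Raschka discusses) of the pairwise loss commonly used to train reward models in RLHF: given scores for a human-preferred ("chosen") and a dispreferred ("rejected") response, the model is pushed to score the chosen one higher.

    import jax
    import jax.numpy as jnp

    def reward_loss(r_chosen, r_rejected):
        # -log sigmoid(r_chosen - r_rejected): minimized when the reward
        # model assigns higher scores to the human-preferred responses.
        return -jnp.mean(jax.nn.log_sigmoid(r_chosen - r_rejected))

    # Toy scores for a batch of three preference pairs.
    r_chosen = jnp.array([1.2, 0.3, 2.0])
    r_rejected = jnp.array([0.7, 0.9, 1.5])
    print(reward_loss(r_chosen, r_rejected))

The trained reward model's scores then serve as the training signal for aligning the policy LLM during RLHF.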

Thanks for your Support!

Big thanks to all of the Patreon supporters and company sponsors. If you want to support the newsletter, you can check out the Patreon page. It's not necessary, but it lets me know that I'm doing a good job and that you're finding value in the content.