A Byte of Coding Issue 351

Hey-yo,

Happy Valentine’s day! Lots of AI stuff today. Just happened to be what was interesting.

Anyway, here’s the issue.

Unblocked provides development teams with helpful and accurate answers to questions about their codebase. It consolidates fragmented information and surfaces relevant knowledge by complementing source code with discussions from GitHub, Slack, Notion, JIRA, and more. Now teams spend less time digging for context and more time building great software.

Published: 12 February 2024

Tags: c++, optimization

Denis Bakhvalov wrote a five part series on “how to collect high-level information about a program’s interaction with memory”.

Some highlights:

  • “Memory profiling helps you understand how an application uses memory over time and helps you build the right mental model of a program’s behavior”

  • uses Stockfish as a case study for memory usage

  • analyzes the memory footprint with the Intel Software Development Emulator (SDE) tool
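As a minimal illustration of the idea (not an example from the series, which uses native tools like SDE), Python's standard-library `tracemalloc` module gives the same kind of high-level view: how much memory a program allocates and what its peak footprint was over a window of time.

```python
import tracemalloc

# Sketch: track Python-level allocations to observe a program's
# memory footprint over time (illustrative, not from the article).
tracemalloc.start()

data = [bytes(1024) for _ in range(1000)]  # allocate roughly 1 MB

current, peak = tracemalloc.get_traced_memory()
print(f"current: {current} bytes, peak: {peak} bytes")
tracemalloc.stop()
```

Native profilers observe the whole process (heap, stacks, mapped files), but the mental model is the same: sample usage over time and attribute it to the code that allocated it.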

Published: 13 February 2024

Tags: machine learning, ai

David Tan and Jessie Wang discuss the different aspects of designing an “AI Concierge proof of concept (POC)” that “provides an interactive, voice-based user experience to assist with common residential service requests”.

Some highlights:

  • “leverages AWS services (Transcribe, Bedrock and Polly) to convert human speech into text, process this input through an LLM, and finally transform the generated text response back into speech”

  • “delve into the project's technical architecture, the challenges [they] encountered, and the practices that helped [them] iteratively and rapidly build an LLM-based AI Concierge”

  • “LLM engineering != prompt engineering”
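The three-stage pipeline the article describes can be sketched as a chain of pluggable callables. This is not the authors' code: the stub stages below stand in for the real AWS calls (Transcribe for speech-to-text, an LLM via Bedrock, Polly for text-to-speech), which makes each stage easy to swap out and test in isolation.

```python
from typing import Callable

def run_concierge(
    audio: bytes,
    transcribe: Callable[[bytes], str],   # speech -> text (e.g. Transcribe)
    generate: Callable[[str], str],       # text -> response (e.g. Bedrock LLM)
    synthesize: Callable[[str], bytes],   # response -> speech (e.g. Polly)
) -> bytes:
    text = transcribe(audio)
    reply = generate(text)
    return synthesize(reply)

# Hypothetical stub stages for illustration; real ones would call boto3 clients.
audio_out = run_concierge(
    b"fake-audio",
    transcribe=lambda a: "my sink is leaking",
    generate=lambda t: f"Logging a maintenance request: {t}",
    synthesize=lambda r: r.encode(),
)
print(audio_out.decode())
```

Keeping the stages as plain functions also makes the article's point concrete: prompt engineering only touches the `generate` step, while the engineering work spans the whole pipeline.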

Published: 19 January 2024

Tags: machine learning, ai, scientific paper

Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao published a paper on “the most capable Monocular Depth Estimation (MDE) foundation models”.

Some highlights:

  • provides better depth estimation on videos and images compared to other available models

  • "[designed] a data engine to collect and automatically annotate large-scale unlabeled data (∼62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error”

  • trained on both labeled and unlabeled images
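The “data engine” idea can be sketched in a few lines. This is not the paper's code, just the shape of the loop: a teacher model auto-annotates a large unlabeled pool, and the student then trains on the union of human-labeled data and pseudo-labeled data.

```python
def build_training_set(labeled, unlabeled, teacher):
    """Combine human labels with teacher-generated pseudo-labels."""
    pseudo = [(x, teacher(x)) for x in unlabeled]  # auto-annotation step
    return labeled + pseudo

# Hypothetical toy data: (image id, depth label) pairs and a stub predictor.
labeled = [("img0", 1.5)]
unlabeled = ["img1", "img2", "img3"]
teacher = lambda x: 2.0  # stand-in for a depth-estimation model

train_set = build_training_set(labeled, unlabeled, teacher)
print(len(train_set))  # 4
```

At the paper's scale the unlabeled pool is ~62M images, which is what enlarges data coverage and reduces generalization error.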

Thanks for your Support!

Big thanks to all of the Patreon supporters and company sponsors. If you want to support the newsletter, you can check out the Patreon page. It's not necessary, but it lets me know that I'm doing a good job and that you're finding value in the content.