<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>tuba</title><link>https://0xtuba.github.io/</link><description>Recent content on tuba</description><generator>Hugo -- 0.128.0</generator><language>en</language><lastBuildDate>Wed, 12 Feb 2025 12:00:00 +0000</lastBuildDate><atom:link href="https://0xtuba.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>ASI Superforecaster</title><link>https://0xtuba.github.io/posts/agi-investor/</link><pubDate>Wed, 12 Feb 2025 12:00:00 +0000</pubDate><guid>https://0xtuba.github.io/posts/agi-investor/</guid><description>Introduction An ASI superforecaster refers to a hypothetical artificial superintelligence that can predict real-world events with superhuman accuracy across domains. Artificial Super Intelligence (ASI) is defined as an AI system that surpasses human cognitive capabilities across a wide range of tasks (Artificial general intelligence - Wikipedia). In this context, an ASI superforecaster would possess broad knowledge and reasoning skills enabling it to forecast events in politics, economics, science, and other fields better than any human.</description></item><item><title>Exploring CLIP Latent Space</title><link>https://0xtuba.github.io/posts/clip-latent-space/</link><pubDate>Mon, 09 Sep 2024 12:00:00 +0000</pubDate><guid>https://0xtuba.github.io/posts/clip-latent-space/</guid><description>One interesting area of research today is the open problem of how to control image models better. For example, it is difficult to tweak the output of an image model through prompting – a small change in the prompt usually ends up changing the entire image.
New techniques such as ControlNet and IP-Adapter let users maintain the structure of an image while changing small aspects of it, for example preserving a person&amp;rsquo;s likeness while changing the hair colour.</description></item><item><title>Understanding Fine Tuning</title><link>https://0xtuba.github.io/posts/understanding-fine-tuning/</link><pubDate>Fri, 26 Jul 2024 12:00:00 +0000</pubDate><guid>https://0xtuba.github.io/posts/understanding-fine-tuning/</guid><description>Experiments in Fine Tuning The high-level idea of fine-tuning is fairly straightforward: re-training a model on some set of new data should give the model new knowledge or skills. Many services offer “out-of-the-box” fine-tuning on both open and closed models, and they make the process extremely simple: you submit a dataset, and they train and host the model. However, how this works under the hood was a mystery to me.</description></item></channel></rss>