JoseRFJunior/TransNAR (https://github.com/JoseRFJuniorLLMs/TransNAR, paper: https://arxiv.org/html/2406.09308v1) implements the TransNAR hybrid architecture. Similar to Alayrac et al., existing Transformer layers are interleaved with gated cross-attention layers that let information flow from the NAR (neural algorithmic reasoner) to the Transformer: queries are generated from tokens, while keys and values are obtained from the nodes and edges of the graph. The node and edge embeddings come from running the NAR on the graph version of the reasoning task to be solved. When experimenting with pre-trained Transformers, the cross-attention gate is initially closed, so the language model's internal knowledge is fully preserved at the beginning of training.
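A minimal PyTorch sketch of what such a gated cross-attention block could look like; the class name, dimensions, and the flattening of node/edge embeddings into a single key/value sequence are my assumptions, not the repository's code:

```python
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Sketch of a Flamingo-style gated cross-attention block: token embeddings
    query node/edge embeddings produced by a pre-trained NAR."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Gate starts at zero so tanh(gate) = 0: cross-attention is initially
        # "closed" and the pre-trained LM's behaviour is preserved.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, tokens: torch.Tensor, nar_embeddings: torch.Tensor) -> torch.Tensor:
        # Queries come from the token stream; keys/values come from the NAR's
        # node and edge embeddings (flattened into one sequence).
        attended, _ = self.attn(self.norm(tokens), nar_embeddings, nar_embeddings)
        return tokens + torch.tanh(self.gate) * attended
```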
SwapAnything is a new method that allows swapping any object in an image with personalized concepts given by a reference image.
Key points: 1️⃣ It uses pre-trained diffusion models to enable precise and high-fidelity object swapping in images. 2️⃣ Targeted variable swapping ensures perfect background preservation while swapping specific areas. 3️⃣ SwapAnything achieves good results in single-object, multi-object, partial-object, and cross-domain swapping tasks.
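As an illustration of the targeted variable swapping idea, the background-preserving swap can be pictured as a masked copy in latent space; this is a minimal sketch under that assumption, not the authors' implementation:

```python
import torch

def masked_latent_swap(source_latent: torch.Tensor,
                       concept_latent: torch.Tensor,
                       object_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative targeted variable swap: replace latent variables only inside
    the object mask and keep background latents untouched.
    All tensors are assumed to share shape (C, H, W); mask is 1 inside the object."""
    return object_mask * concept_latent + (1.0 - object_mask) * source_latent
```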
Anthropic introduces "Many-shot Jailbreaking" (MSJ), a new attack on large language models! MSJ exploits long context windows to override safety constraints.
Key Points: * Prompts LLMs with hundreds of examples of harmful behavior formatted as a dialogue * Generates malicious examples using an uninhibited "helpful-only" model * Effective at jailbreaking models like Claude 2.0, GPT-3.5, GPT-4 * Standard alignment techniques provide limited protection against long context attacks
Google DeepMind introduces Gecko, a new text embedding model! Gecko uses a two-step process that leverages synthetic data generation and reranking.
Key points: * Uses an LLM to generate diverse synthetic queries and tasks from web passages * Refines the data by retrieving candidate passages and relabeling positives/negatives using the same LLM * Achieves very good results on the Massive Text Embedding Benchmark, where the compact 256D Gecko outperforms 768D models * The 768D Gecko achieves state-of-the-art performance, competing with far larger models.
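A rough sketch of the two-step recipe as I read it; `llm_generate`, `llm_score`, and `corpus_index.search` are hypothetical placeholders for an LLM API and a retrieval index, not the paper's code:

```python
def build_training_pair(passage: str, corpus_index, llm_generate, llm_score):
    """Hypothetical sketch of a two-step synthetic-data pipeline."""
    # Step 1: ask the LLM to invent a task description and a query for the passage.
    task, query = llm_generate(
        f"Read the passage and propose a retrieval task and a query.\n{passage}")

    # Step 2: retrieve candidate passages, then let the same LLM relabel them:
    # the top-ranked candidate becomes the positive (it may differ from the seed
    # passage) and a low-ranked one becomes a hard negative.
    candidates = corpus_index.search(query, k=20)
    ranked = sorted(candidates, key=lambda p: llm_score(task, query, p), reverse=True)
    positive, hard_negative = ranked[0], ranked[-1]
    return query, positive, hard_negative
```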
A new paper titled "Long-Form Factuality in Large Language Models" proposes a new approach to evaluate the long-form factuality of large language models using an AI agent! They introduce SAFE (Search-Augmented Factuality Evaluator) which leverages an LLM to break down responses into individual facts, query Google to verify each fact, and perform multi-step reasoning.
Key points: * SAFE (Search-Augmented Factuality Evaluator) is an automated method that uses an LLM agent to evaluate factuality * It also introduces LongFact, a 2,280-prompt set spanning 38 topics to test open-domain factual knowledge * SAFE agrees with human annotators 72% of the time while being 20x cheaper; in a small-scale experiment using a more thorough human procedure (researchers + full internet search), it also wins 76% of the disagreement cases * Larger models like GPT-4, Claude Opus and Gemini Ultra tend to exhibit better long-form factuality.
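A hedged sketch of what a SAFE-style evaluation loop might look like; `llm` and `search` are hypothetical callables standing in for an LLM API and a Google Search wrapper, not the paper's code:

```python
def safe_evaluate(response: str, llm, search) -> dict:
    """Rough sketch of fact splitting, search, and per-fact verdicts."""
    facts = llm(
        f"Split the following answer into self-contained atomic facts, one per line:\n{response}"
    ).splitlines()
    verdicts = {}
    for fact in filter(None, (f.strip() for f in facts)):
        # Multi-step reasoning: the LLM proposes a search query, reads the
        # results, and decides whether the fact is supported.
        query = llm(f"Write a Google query to verify: {fact}")
        evidence = search(query)
        verdicts[fact] = llm(
            f"Fact: {fact}\nEvidence: {evidence}\nAnswer 'supported' or 'not supported'.")
    return verdicts
```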
A new paper introduces Visual CoT, a new approach that enhances multi-modal large language models with visual chain-of-thought reasoning capabilities. This allows language models to dynamically identify and focus on specific regions within images that are most relevant for answering questions, mimicking human-like efficient visual reasoning.
Key points: * Introduces the 373k Visual CoT dataset with bounding box annotations highlighting essential image regions * Proposes a multi-turn pipeline for focusing on relevant visual inputs * Achieves strong results on multi-modal benchmarks
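A minimal sketch of the multi-turn idea, assuming a hypothetical `model.generate` interface: the model first localizes the relevant region as a bounding box, then answers with the cropped region added to its input:

```python
from PIL import Image

def visual_cot_answer(model, image: Image.Image, question: str) -> str:
    """Illustrative two-turn pipeline; the model interface is an assumption."""
    # Turn 1: ask for the image region most relevant to the question.
    box_text = model.generate(image, f"{question}\nReturn the relevant region as x1,y1,x2,y2.")
    x1, y1, x2, y2 = (int(v) for v in box_text.split(","))

    # Turn 2: feed the cropped region back alongside the full image so the
    # model can reason over a zoomed-in view before answering.
    region = image.crop((x1, y1, x2, y2))
    return model.generate([image, region], question)
```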
"Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts" is a new framework designed to animate specific regions within an image through user inputs.
Key points: * Enables precise animation of selected image regions with just a user click and a concise motion description. * Achieves promising results for generating localized animations.
Synth^2 is a new approach that leverages large language models and text-to-image generators to create synthetic image-caption data for boosting visual-language model performance.
Key Points: * Overcomes data limitations by generating high-quality synthetic image-caption pairs, reducing reliance on costly human annotations. * Achieves competitive results on image captioning tasks using 40x less paired data than state-of-the-art methods.
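A simplified, pixel-space sketch of the synthetic-pair generation loop, assuming hypothetical `llm` and `text_to_image` callables; it is only meant to show the data flow, not the paper's exact setup:

```python
def generate_synthetic_pairs(llm, text_to_image, n_pairs: int):
    """Hypothetical sketch of caption -> image synthetic data generation."""
    pairs = []
    for _ in range(n_pairs):
        # The LLM writes a caption, the generator renders it, and the pair is
        # added to the visual-language model's training set.
        caption = llm("Write one diverse, concrete image caption.")
        image = text_to_image(caption)
        pairs.append((image, caption))
    return pairs
```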
A recent paper titled "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect" proposes a simple and effective approach to pruning Large Language Models (LLMs) by removing redundant layers.
Key points: * Discovers significant redundancy across layers in LLMs, with some layers playing a negligible role in final performance. * Defines a new metric called Block Influence (BI) to quantify the importance of each layer in an LLM. * Removes layers with low BI scores, achieving up to a 25% reduction in parameters and computation while maintaining 92% of the LLM's performance.
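To make the Block Influence idea concrete, here is a small sketch computing BI as one minus the mean cosine similarity between a layer's input and output hidden states; this matches my reading of the metric, but the exact formulation may differ:

```python
import torch

def block_influence(hidden_in: torch.Tensor, hidden_out: torch.Tensor) -> float:
    """BI sketch: 1 minus the average cosine similarity between a layer's input
    and output hidden states over all tokens. Shapes: (tokens, d_model)."""
    cos = torch.nn.functional.cosine_similarity(hidden_in, hidden_out, dim=-1)
    return float(1.0 - cos.mean())

# Layers with the lowest BI change their inputs the least and are the pruning
# candidates: sort layers by BI and drop the bottom ones.
```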
A recent paper titled "Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters" proposes using fine-tuned Multimodal Language Models (MLMs) as high-quality filters for image-text data.
Key points: * Defines multiple metrics to assess image-text quality from different perspectives like object details, text quality, and semantic understanding. * Leverages GPT-4 and GPT-4V to construct high-quality instruction data for fine-tuning open-source MLMs as effective data filters. * Fine-tuned MLM filters generate more precise scores, leading to better filtered data and improved performance of pre-trained models on various downstream tasks.
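A tiny illustrative filtering loop, under the assumption that the fine-tuned MLM exposes a scalar quality score per image-text pair; `mlm_score` and the threshold are hypothetical:

```python
def filter_pairs(pairs, mlm_score, threshold: float = 85.0):
    """Keep only pairs whose quality score clears the threshold; `mlm_score`
    stands in for a fine-tuned multimodal LM returning a 0-100 score for one
    metric (e.g. image-text matching or object detail fulfillment)."""
    return [(img, txt) for img, txt in pairs if mlm_score(img, txt) >= threshold]
```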
"Multi-LoRA Composition for Image Generation" introduces two new approaches for combining multiple visual elements in text-to-image generation using Low-Rank Adaptations (LoRAs)! 🎨
Key Points: * Proposes two methods - LoRA Switch and LoRA Composite - that activate/combine LoRAs during the denoising process rather than merging weights * LoRA Switch cycles through different LoRAs at each step, while LoRA Composite averages guidance from all LoRAs simultaneously
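A sketch of the two decoding-time strategies as I understand them; `unet.set_active_lora` and the call signatures are hypothetical hooks, not the paper's API:

```python
def lora_switch_step(unet, loras, step: int, latents, cond):
    """LoRA Switch (sketch): only one LoRA is active at each denoising step,
    cycling through them in round-robin order."""
    unet.set_active_lora(loras[step % len(loras)])
    return unet(latents, cond)

def lora_composite_step(unet, loras, latents, cond, uncond, scale: float = 7.5):
    """LoRA Composite (sketch): compute classifier-free guidance once per LoRA
    and average the guided scores instead of merging LoRA weights."""
    guided = []
    for lora in loras:
        unet.set_active_lora(lora)
        eps_c, eps_u = unet(latents, cond), unet(latents, uncond)
        guided.append(eps_u + scale * (eps_c - eps_u))
    return sum(guided) / len(guided)
```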
The "Design2Code: How Far Are We From Automating Front-End Engineering" paper presents a benchmark for multimodal large language models (LLMs) aimed at automating front-end web development by translating webpage designs (screenshots) into code. This task evaluates the models' ability to recreate webpages that are visually and structurally similar to the original designs.
Key Points: * Introduces the Design2Code task and benchmark for converting webpage screenshots into code, aiming to automate front-end web development. * Evaluates multimodal LLMs using comprehensive metrics for visual similarity and element matching. * GPT-4V outperforms other models in terms of visual resemblance and content accuracy, with generated webpages often preferred over the original references.