In Level Up Coding, by Lan Chu: The best RAG technique yet? Anthropic's Contextual Retrieval and Hybrid Search. How combining contextual BM25 with Contextual Embeddings can massively improve your RAG system. (Oct 1, 2024)
In Artificial Intelligence in Plain English, by Eric Risco: The End of Retrieval Augmented Generation? Emerging Architectures Signal a Shift. (Dec 25, 2023)
Mike Dillinger: Knowledge Graphs enable LLMs to really understand. LLMs vs human understanding. (Aug 9, 2023)
Jesus Rodriguez: Meet LMQL: An Open Source Query Language for LLMs. Developed by ETH Zurich, the language explores new paradigms for LLM programming. (Jun 14, 2023)
In TDS Archive, by Aparna Dhinakaran: Applying Large Language Models to Tabular Data to Identify Drift. Can LLMs reduce the effort involved in anomaly detection, sidestepping the need for parameterization or dedicated model training? (Apr 25, 2023)
In TDS Archive, by Guy Dar: Speaking Probes: Self-Interpreting Models? Can language models aid in their interpretation? (Jan 16, 2023)
Peter Izsak: Is Positional Encoding Required In All Language Models? A joint study shows that causal Language Models without positional encoding perform similarly to standard models with positional encoding. (Dec 13, 2022)
Cerebrium: SetFit outperforms GPT-3 while being 1600x smaller. Everyone is very familiar with the current hype around Large Language Models (LLM) such as GPT-3 and Image Generation models such as DALL-E… (Oct 24, 2022)