In Level Up Coding, Lan Chu: "The best RAG technique yet? Anthropic's Contextual Retrieval and Hybrid Search" (Oct 1, 2024). How combining contextual BM25 with Contextual Embeddings can massively improve your RAG system.
In Artificial Intelligence in Plain English, Eric Risco: "The End of Retrieval Augmented Generation? Emerging Architectures Signal a Shift" (Dec 25, 2023).
Mike Dillinger: "Knowledge Graphs enable LLMs to really understand" (Aug 9, 2023). LLMs vs. human understanding.
Jesus Rodriguez: "Meet LMQL: An Open Source Query Language for LLMs" (Jun 14, 2023). Developed by ETH Zurich, the language explores new paradigms for LLM programming.
In TDS Archive, Aparna Dhinakaran: "Applying Large Language Models to Tabular Data to Identify Drift" (Apr 25, 2023). Can LLMs reduce the effort involved in anomaly detection, sidestepping the need for parameterization or dedicated model training?
In TDS Archive, Guy Dar: "Speaking Probes: Self-Interpreting Models?" (Jan 16, 2023). Can language models aid in their own interpretation?
Peter Izsak: "Is Positional Encoding Required in All Language Models?" (Dec 13, 2022). A joint study shows that causal language models without positional encoding perform similarly to standard models with positional encoding.
Cerebrium: "SetFit outperforms GPT-3 while being 1600x smaller" (Oct 24, 2022). Everyone is very familiar with the current hype around Large Language Models (LLMs) such as GPT-3 and image generation models such as DALL-E…