Pinned · Marko Vidrih in GoPenAI
Mixture of Experts (MoE) in AI Models Explained
The Mixture of Experts (MoE) offers a unique approach to efficiently scaling models while maintaining, or even improving, their…
Dec 12, 2023

Pinned · Marko Vidrih
Do You Need a Zero Knowledge Proof?
Where exactly do zero-knowledge proofs fit in?
Feb 28

Pinned · Marko Vidrih
Crypto Trends Report 2024
A full report of crypto trends in 2024 by Marko Vidrih
Mar 14

Pinned · Marko Vidrih in GoPenAI
Multi-Agent LLM: Harnessing Large Language Models for the Generation of Artificial Experts
What Happens When Multiple AIs Talk to Each Other?
Oct 5, 2023

Marko Vidrih in GoPenAI
AI Race Heats Up: xAI’s Supercluster and Meta’s Llama 3.1 Shake Up the Industry
In a series of groundbreaking announcements, the artificial intelligence landscape is witnessing rapid advancements that promise to reshape…
Jul 23

Marko Vidrih
Mistral Fine-Tuning API: Here’s What You Need To Know
Mistral API for fine-tuning Mistral 7B and Mistral Small models
Jun 6

Marko Vidrih
Mamba-2 is Out: Can it replace Transformers?
Mamba-2: A new state space model architecture that outperforms Mamba and Transformer++
Jun 6

Marko Vidrih
The Easiest Way to Run Llama 3 Locally
Download, install, and type one command in the terminal to start using Llama 3 on your laptop.
May 17

Marko Vidrih
New Dawn of Web3 Security: Veritas Protocol Review
The vast expanse of Web3, where innovation thrives alongside vulnerability, demands a stalwart protector. Veritas Protocol, not just…
Apr 25