3 comments

  • sippeangelo 6 hours ago
    The biggest latency improvement I saw was switching off OpenAI's API, which would have a latency anywhere between 0.3 and 6 seconds(!) for the same two-word search embedding...
  • novoreorx 13 hours ago
    Great article! I always feel that the choice of embedding model is quite important, but it's seldom discussed. Most tutorials about RAG just tell you to use a common model like OpenAI's text embedding, making it seem as though any model would do. But even though I'm somewhat aware of this, I lack the knowledge and methods to determine which model is best suited for my scenario. Can you give some suggestions on how to evaluate that? Also, I'm wondering what you think about open-source embedding models like embeddinggemma-300m or e5-large.
  • jawnwrap 11 hours ago
    Cool article, but nothing groundbreaking? Obviously if you reduce your dimensionality, storage and latency decrease... it's less data