12 comments

  • letier 6 minutes ago
    The extraction prompt would need some hardening against prompt injection, as far as I can tell.
  • sheept 2 hours ago
    > LLMs return malformed JSON more often than you'd expect, especially with nested arrays and complex schemas. One bad bracket and your pipeline crashes.

    This might be one reason why Claude Code uses XML for tool calling: repeating the tag name in the closing tag helps the model keep track of where it is during inference, so it's less error-prone.

    • andrew_zhong 1 hour ago
      Yeah that's a good observation. XML's closing tags give the model structural anchors during generation — it knows where it is in the nesting. JSON doesn't have that, so the deeper the nesting the more likely the model loses track of brackets.

      We see this especially with arrays of objects where each object has optional nested fields. With complex nested objects, the model can get every item well formed except one that has a field of the wrong type. That's why we put effort into the repair/recovery/sanitization layer: validate field by field and keep what's valid rather than throwing everything out.
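
      A rough sketch of the array-salvage part (plain zod; the schema and names are illustrative, not our exact internals):

        import { z } from "zod";

        const Product = z.object({
          title: z.string(),
          price: z.number().optional(),
        });

        // Validate each item independently and keep the ones that
        // parse, instead of failing the whole array on one bad item.
        function salvageItems(raw: unknown[]) {
          const kept: z.infer<typeof Product>[] = [];
          for (const item of raw) {
            const result = Product.safeParse(item);
            if (result.success) kept.push(result.data);
          }
          return kept;
        }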

  • Flux159 2 hours ago
    This looks pretty interesting! I haven't used it yet, but I looked through the code a bit. It looks like it uses turndown to convert the HTML to markdown first, then passes that to the LLM, so I'm assuming that's a huge reduction in tokens from the preprocessing. Do you have any data on how often this causes issues, i.e. tables or other information being lost?

    Then it's langchain and structured schemas for the output, along with a specific system prompt for the LLM. Do you know which open-source models work best, or do you just use Gemini in production?
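
    If I'm reading it right, the pipeline is roughly this (my own sketch from skimming the code, so the exact wiring and names are assumptions):

      import TurndownService from "turndown";
      import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
      import { z } from "zod";

      const Schema = z.object({
        products: z.array(z.object({
          title: z.string(),
          price: z.number().optional(),
        })),
      });

      async function extract(html: string) {
        // Preprocess: HTML -> markdown, a big token reduction
        const markdown = new TurndownService().turndown(html);
        const model = new ChatGoogleGenerativeAI({ model: "gemini-2.5-flash" })
          .withStructuredOutput(Schema);
        return model.invoke(`Extract all products:\n\n${markdown}`);
      }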

    Also, looking at the docs, Gemini 2.5 Flash is getting deprecated by June 17th https://ai.google.dev/gemini-api/docs/deprecations#gemini-2.... (I keep getting emails from Google about it), so you might want to update the examples to Gemini 3 Flash.

  • plastic041 3 hours ago
    > Avoid detection with built-in anti-bot patches and proxy configuration for reliable web scraping.

    And it doesn't care about robots.txt.

    • andrew_zhong 2 hours ago
      Good point. The anti-bot patches here (via Patchright) are about preventing the browser from being detected as automated — things like CDP leak fixes so Cloudflare doesn't block you mid-session. It's not about bypassing access restrictions.

      Our main use case is retail price monitoring — comparing publicly listed product prices across e-commerce sites, which is pretty standard in the industry. But fair point, we should make that clearer in the README.

      • plastic041 39 minutes ago
        robots.txt is the most basic access restriction, and this doesn't even read it while faking itself as human [0]. It is about bypassing access restrictions.

        [0]: https://github.com/lightfeed/extractor/blob/d11060269e65459e...

      • zendist 48 minutes ago
        Regardless, you should still respect robots.txt.
        • andrew_zhong 36 minutes ago
          We do in production. Also, scraping browser providers like BrightData respect robots.txt.

          I will add a PR to enforce robots.txt before the actual scraping.
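
          Something along these lines, using the robots-parser package (a sketch of what the PR might do, not merged code):

            import robotsParser from "robots-parser";

            // Check robots.txt before scraping a URL.
            async function isAllowed(target: string, ua: string) {
              const robotsUrl = new URL("/robots.txt", target).href;
              const res = await fetch(robotsUrl);
              if (!res.ok) return true; // no robots.txt: allowed
              const robots = robotsParser(robotsUrl, await res.text());
              return robots.isAllowed(target, ua) ?? true;
            }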

          • plastic041 8 minutes ago
            How can people believe that you respect robots.txt in production when your software's README says it can "Avoid detection with built-in anti-bot patches"?
      • messe 1 hour ago
        > It's not about bypassing access restrictions.

        Yes. It is. You've just made an arbitrary choice not to define it as such.

        • andrew_zhong 31 minutes ago
          I will add a PR to enforce robots.txt before the actual scraping.
  • dmos62 2 hours ago
    What's your experience with not getting blocked by anti-bot systems? I see you've got custom patches for that.
    • andrew_zhong 1 hour ago
      The anti-bot patches here (via Patchright) are about preventing the browser from being detected as automated — fixing CDP leaks, removing automation flags, etc. For sites behind Cloudflare or Datadome, that alone usually isn't enough — you'll need residential proxies and proper browser fingerprints on top. The library supports connecting to remote scraping browsers via WebSocket and proxy configuration for those cases.
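
      The wiring is roughly this (a sketch assuming Patchright's Playwright-compatible API; the endpoint and proxy values are placeholders):

        import { chromium } from "patchright";

        // Local launch with a residential proxy (placeholder values):
        const local = await chromium.launch({
          proxy: {
            server: "http://proxy.example.com:8000",
            username: "user",
            password: "pass",
          },
        });

        // Or connect to a remote scraping browser over WebSocket:
        const remote = await chromium.connectOverCDP(
          "wss://scraping-browser.example.com?token=<token>"
        );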
  • AirMax98 1 hour ago
    This feels like slop to me.

    It may or may not be, but if you want people to actually use this product, I'd suggest improving your documentation and your replies here so they don't read like raw Claude output.

    I also doubt the premise about malformed JSON. I have never encountered anything like what you are describing with structured outputs.

    • andrew_zhong 2 minutes ago
      In the context of e-commerce web extraction, invalid JSON does occur, especially in edge cases, for example:

      price: z.number().optional() -> price: "n/a"

      url: z.string().url().nullable() -> url: "not found"

      A single invalid object in an array (e.g. a missing required field, or truncated input) can also cause the entire output to fail.

      The unique contribution here is that we can recover invalid nullable or optional fields, and also remove invalid nested objects from an array.
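
      A simplified sketch of that recovery pass (plain zod; not our exact internals):

        import { z } from "zod";

        const Product = z.object({
          title: z.string(),
          price: z.number().optional(),     // LLM may emit "n/a"
          url: z.string().url().nullable(), // LLM may emit "not found"
        });

        // On failure, null out or drop each invalid field if its
        // schema tolerates that, then re-validate; if the object is
        // still invalid, drop it from the array instead of failing.
        function recover(raw: Record<string, unknown>) {
          const first = Product.safeParse(raw);
          if (first.success) return first.data;
          const patched: Record<string, unknown> = { ...raw };
          for (const issue of first.error.issues) {
            const key = String(issue.path[0]);
            const field =
              Product.shape[key as keyof typeof Product.shape];
            if (field?.safeParse(null).success) {
              patched[key] = null;          // nullable field
            } else if (field?.safeParse(undefined).success) {
              delete patched[key];          // optional field
            }
          }
          const retry = Product.safeParse(patched);
          return retry.success ? retry.data : null;
        }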

  • zx8080 2 hours ago
    Robots.txt anyone?
    • andrew_zhong 2 hours ago
      Same as my reply to plastic041 above: the anti-bot patches (via Patchright) are about not being detected as automated, not about bypassing access restrictions, and we'll make the retail price-monitoring use case clearer in the README.
