Brand.dev v2
Overview
Brand.dev is a versatile branding API for customizing online experiences. It lets users extract logos, primary colors, descriptions, classifications, and more from any domain quickly and with little setup.
It is especially useful for enhancing B2B software experiences: a single API call pulls a large amount of branding information from a domain, which can be used to build a more immersive and tailored user experience.
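For illustration only, here is a minimal sketch of what such a single-call lookup might look like over plain HTTP. The endpoint path, query parameter, auth scheme, and response fields shown are assumptions for the sketch, not details taken from Brand.dev's documentation; the official docs or SDKs define the real contract.

```typescript
// Hypothetical sketch of a single-call brand lookup.
// Endpoint path, query parameter, and response shape are assumptions;
// consult the official Brand.dev docs or SDKs for the actual contract.
const API_KEY = process.env.BRAND_DEV_API_KEY;

async function fetchBrand(domain: string): Promise<any> {
  const res = await fetch(
    `https://api.brand.dev/v1/brand/retrieve?domain=${encodeURIComponent(domain)}`,
    { headers: { Authorization: `Bearer ${API_KEY}` } }
  );
  if (!res.ok) throw new Error(`Brand lookup failed: ${res.status}`);
  return res.json(); // expected to include logos, colors, description, classification
}

// Usage: pull branding for a domain and log it.
fetchBrand("example.com").then((brand) => console.log(brand));
```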
The Brand.dev API also includes built-in functions for enriching company and transaction data, which can be used to improve the user experience by adding brand context to company line items.
It is likewise well suited to enriching transactions, CRM records, emails, and more. The API can also pre-fill and personalize the onboarding process for customers who sign up with their corporate email addresses.
The extracted domain data can be used to bolster onboarding flows, thereby potentially increasing onboarding conversion rates.
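As one hedged illustration of that onboarding use case, the sketch below derives a company domain from a sign-up email and reuses the hypothetical fetchBrand helper from the earlier sketch to pre-fill a form. The response field names (name, logos, colors) are assumptions, not documented Brand.dev fields.

```typescript
// Hypothetical sketch: pre-fill onboarding from a corporate sign-up email.
// Relies on the fetchBrand helper sketched earlier; response field names are assumptions.
const FREE_PROVIDERS = new Set(["gmail.com", "outlook.com", "yahoo.com", "hotmail.com"]);

async function prefillOnboarding(email: string) {
  const domain = email.split("@")[1]?.toLowerCase();
  if (!domain || FREE_PROVIDERS.has(domain)) return null; // personal email, nothing to enrich

  const brand = await fetchBrand(domain);
  return {
    companyName: brand.name,               // assumed field
    logoUrl: brand.logos?.[0]?.url,        // assumed field
    primaryColor: brand.colors?.[0]?.hex,  // assumed field
  };
}

// Usage: prefillOnboarding("jane@acme.com") would return branding-derived form defaults.
```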
Releases
Introduced a fairer billing model: customers are charged only for successful 200 responses, not bad domains or invalid inputs.
Launched four new APIs (Screenshot, Styleguide, AI Query, and Prefetch) to improve brand extraction, design-system retrieval, automated screenshotting, and pre-warming performance.
Delivered significant speed boosts across infrastructure—20% faster retrieve calls, 40–60% faster screenshot/styleguide pipelines, 30% lower latency overall, and cold-hit retrieval reduced to ~9s.
Added new parameters and controls such as ticker_exchange, page, prioritize, maxSpeed, and timeoutMS to give developers more precision and customization (a usage sketch follows this list).
Improved data quality with updates such as better wordmark extraction, reduced false positives, enhanced NSFW detection, EIC industry classification, and accurate link detection (careers, blog, terms, pricing, etc.).
Expanded platform ecosystem with GitHub login, Zapier 2.0, new Python, Ruby, and TypeScript SDKs, and contact/industry previews in the API playground.
Launched new Developer Portal, integrated it into the main site, added invoice management, plan upgrades/downgrades, and a built-in feedback widget.
Strengthened reliability with better error handling, validation, DDoS protection, API history fixes, and a new API History dashboard for usage analytics.
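To make the new request controls above concrete, here is a hedged sketch of how parameters such as prioritize, maxSpeed, and timeoutMS might be passed on a retrieve call. The parameter names come from the release notes; their placement as query parameters, their accepted values, and the endpoint path are assumptions.

```typescript
// Hypothetical sketch: tuning a retrieve call with the parameters named in the
// release notes. Names come from the notes; values and placement are assumptions.
async function fetchBrandTuned(domain: string) {
  const params = new URLSearchParams({
    domain,
    prioritize: "speed", // assumed value: trade completeness for latency
    maxSpeed: "true",    // assumed boolean flag
    timeoutMS: "15000",  // assumed per-request timeout in milliseconds
  });
  const res = await fetch(`https://api.brand.dev/v1/brand/retrieve?${params}`, {
    headers: { Authorization: `Bearer ${process.env.BRAND_DEV_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Retrieve failed: ${res.status}`);
  return res.json();
}
```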
Top alternatives
- Sha Ok (26 karma, Nov 30, 2024), on @FetchFox: Working great as of 30/11/2024. Let me tell you why I'm so happy about this, because I was desperate. Firecrawl.dev? Inconsistent API documentation. Apify scrapers? Couldn't work on my target URL. I spent $150 over 4-5 OTHER tools before finding this one, which is free to use locally right now. Wow. I'm dumb. My target URL was a broken site, with 500 JavaScript errors, content violations, bad cookies, etc. Not a single scraper worked that I tried, except for this one. So yeah, I'd say I'm pretty happy.
- Extract data instantly: Grab text, images, emails & links from any website.
- Very expensive for most common pages; there's no way you'd pay that.
- I've been using Octoparse for a few months now, and I must say, it has exceeded my expectations. As someone with limited coding knowledge, I was initially hesitant to try web scraping. However, Octoparse made the process incredibly easy with its intuitive interface and powerful features. I've been able to extract data from multiple websites without any hassle, and the automation capabilities have saved me a ton of time. The customer support team has also been incredibly helpful whenever I've had questions or encountered issues. Overall, I highly recommend Octoparse to anyone looking for a reliable and user-friendly web scraping tool. Another notable aspect of Octoparse is its versatility, catering to both beginners and advanced users alike. Its pre-built templates and workflows streamline the scraping process, while advanced users can leverage its customization options for more complex scraping tasks. Additionally, Octoparse offers scheduling and automation features, allowing users to automate repetitive scraping tasks and save time.
- Doesn't have an option to scrape in bulk, so it's useless.
- AVOID AT ALL COSTS. Amateur hour if I ever saw it: they change their entire API endpoints on a whim on the SAME VERSION, breaking all of the work you've done to integrate with them (thousands of $), then tell you you're not a $5k/month Enterprise customer and they don't really support their self-serve product, so... tough luck, buddy. I was one of their first customers and have been using them for over a year and a half, only to wake up one day to customer complaints because our integration had broken, and to be told they basically don't care about it. After I reached out to their team, it became clear that they had simply changed their entire API from one day to the next on the same version, with no notice and documentation that doesn't reflect the changes they made. Literally everything is broken in one way or another: the UI fails to save changes every other try, the API has three different versions in use at the same time, some endpoints use v3 while others use v4, similar payload responses don't even reply with the same structure for the data, the documentation is completely out of sync with the actual endpoints, the webhooks trigger 3 times in a row, and on and on and on... Somehow they have several engineers, yet no one there actually knows how to fix your problem. I've been a customer for over a year and a half, and after breaking my entire integration, they just told me to go take my business elsewhere because they couldn't support their OWN breaking changes! Oh, and by the way, it took a week of back and forth, trying to fix my integration and working with their broken documentation, for them to tell me this. Total insanity. I couldn't ever imagine being an Enterprise customer with these people; their whole product is built on a shoestring and they have absolutely no respect whatsoever for their customers.
