You had me cracking up at
parses HTML with regex
I started thinking about performance gains.
Local data hoarder who looks down on calls outside the network as obscenities. (Entire collection scraped more aggressively than tech bros training an AI model)
parses HTML with regex
shudders
You can’t parse [X]HTML with regex. Because HTML can’t be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the nerves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the transgression of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of regex parsers for HTML will instantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes, the pestilent slithy regex-infection will devour your HTML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fight he com̡e̶s, ̕h̵is un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo͟ur eye͢s̸ ̛l̕ik͏e liquid pain, the song of re̸gular expression parsing will extinguish the voices of mortal man from the sphere I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful the final snuffing of the lies of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL IS LOST the pon̷y he comes he c̶̮omes the ichor permeates all MY FACE MY FACE ᵒh god no NO NOO̼OO NΘ stop the an̶͑̾̾̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s͎ a̧͈͖r̽̾̈́͒͑e not rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ
Have you tried using an XML parser instead?
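Obligatory non-regex counterexample: a minimal sketch with lxml, for anyone who wants to keep their <center> intact. The URL and the XPath are placeholders I made up, and lxml and requests are assumed to be installed (pip install lxml requests).

```python
# Sketch: parse HTML with an actual parser instead of regex.
# The URL and the XPath are placeholders, not anything from this thread.
import requests
from lxml import html

resp = requests.get("https://example.com/some-page")  # placeholder URL
resp.raise_for_status()

tree = html.fromstring(resp.text)

# Pull every link's text and href via XPath -- no regex in sight.
for a in tree.xpath("//a[@href]"):
    print(a.text_content().strip(), "->", a.get("href"))
```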
xmllint --root + regex = chef's kiss
I got a bot on Lemmy that scrapes ESPN for sports/football updates, using regex to retrieve the JSON that is embedded in the HTML file. It works perfectly so far 🤷‍♂️
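For the curious, the trick looks roughly like the sketch below. The URL, the __PAGE_STATE__ variable name, and the regex are my own guesses, not ESPN's actual markup; the point is that the regex only grabs the blob and json.loads does the real parsing.

```python
# Sketch of "regex out the JSON embedded in the HTML".
# URL, variable name, and pattern are assumptions -- adjust to the real page.
import json
import re

import requests

html_text = requests.get("https://www.espn.com/nfl/scoreboard").text  # assumed endpoint

# Many sites ship their page state as a JSON blob in a <script> tag,
# e.g. `window.__PAGE_STATE__ = {...};` -- grab it with a (gasp) regex.
match = re.search(
    r"window\.__PAGE_STATE__\s*=\s*(\{.*?\})\s*;\s*</script>",
    html_text,
    re.DOTALL,
)
if match:
    data = json.loads(match.group(1))  # the JSON itself is parsed properly
    print(list(data.keys()))
```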
🤢
we’re in web 3.0 now, apis and data access are a thing of the past. so scraping it is!
Guess who recently asked a company if he could get access to the API they use to load stuff in their frontend from their backend and got told “Nope and btw scraping is against our TOS”?
Well, if you won’t give me the info (which you provide anyway) the easy way, I can still take it the hard way. 🤷‍♂️
Maybe you should just try being lucky. I found a critical security vulnerability while working on my scraping project. I told them, they paid me and gave me written permission to scrape.
You are braver than I am; here in Germany, people usually get sued for reporting security vulnerabilities.
tf? They should offer you a job if anything.
That is, if you lived in a place with an open attitude toward new technologies.
But the technology is already there in place, and you get sued if you point out security flaws in it? Crazy.
Yes, because any circumvention of any form of security, be it as useless as a hardcoded default password, is considered a crime in German law. So even the discovery of a security flaw puts you with one foot in jail, because technically you did something you are not supposed to.
I know a guy who did exactly that and got sued. The security failure he reported was itself a criminal offense committed by the company, so he won the case. German companies really love shooting themselves in the foot.
Over here, not just sued, but sued for extortion, because they had the audacity to ask for a bug bounty. Ok then, if I ever find a security hole that exposes sensitive data, filing a GDPR report it is.
You scrape 'em boy, you scrape 'em good!
i mean i haven’t signed anything…
“by using this site you agree to…”
I’m not using your site. And I agree to nothing. Now, go GET for me.
For today’s lucky 5000:
I don’t get it
That’s just one of the many things you can do at Zombocom
The infinite is possible at Zombocom
Make sure it’s not muted. The audio is the vehicle for this journey.
My sound is on, but I hear nothing ¯\(°_o)/¯
That’s just one of the many things you can do at Zombocom
What exactly are y’all scraping?
Zombocom
I scrape my own bank and financial aggregator to have a self hosted financial tool. I scrape my health insurance to pull in data to track for my HSA. I scrape Strava to build my own health reports.
How so? Shouldn’t that information be behind quite a few layers of security?
I developed my own scraping system using browser automation frameworks. I also developed a secure storage mechanism to keep my data protected.
Yeah, there is some security, but ultimately, if they expose it to me via a username and password, I can use that same information to scrape it. It's helpful that I know my own credentials, have access to all 2FA mechanisms, and am not brute-forcing lots of logins, so it looks normal.
Some providers protect their websites with bot detection systems which are hard to bypass, but I’ve closed accounts with places that made it too difficult to do the analysis I need to do.
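If anyone wonders what that looks like in practice, here's a bare-bones Selenium sketch. Every URL, element ID, and selector is an invented placeholder, and a real bank login flow will differ (and probably fight back); it just shows the shape of "log in, wait out the 2FA, read one value".

```python
# Bare-bones sketch: log in with your own credentials and pull one number.
# All URLs, element IDs, and selectors are invented placeholders.
import os

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
try:
    driver.get("https://bank.example.com/login")  # placeholder URL

    driver.find_element(By.ID, "username").send_keys(os.environ["BANK_USER"])
    driver.find_element(By.ID, "password").send_keys(os.environ["BANK_PASS"])
    driver.find_element(By.ID, "login-button").click()

    # 2FA: easiest to approve it yourself and let the script wait it out.
    WebDriverWait(driver, 120).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".account-balance"))
    )

    balance = driver.find_element(By.CSS_SELECTOR, ".account-balance").text
    print("Current balance:", balance)
finally:
    driver.quit()
```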
I only ever did one scraping script; it took the top 25 hotels and their prices from a booking.com web page. They used to collect those manually.
The postmarketOS device tables, because I was looking for a device that was unofficially supported but somehow not in their damn table.
Are there benefits to websites thinking your agent is a phone? I assumed phones just came with additional restrictions such as meta tags in the stylesheet, not like stylesheets matter at all to a scraper lol
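For what it's worth, claiming to be a phone is just a User-Agent header, and some sites do serve lighter markup to it. The UA string below is a generic Android/Chrome example and the URL is a placeholder.

```python
# Fetch a page while claiming to be a phone -- just a User-Agent header.
# The UA string is a generic Android/Chrome example; the URL is a placeholder.
import requests

MOBILE_UA = (
    "Mozilla/5.0 (Linux; Android 13; Pixel 7) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36"
)

resp = requests.get("https://example.com", headers={"User-Agent": MOBILE_UA})
print(resp.status_code, len(resp.text), "bytes of (possibly mobile) HTML")
```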
Hey, you guys got any cool tips for website scraping?
I recommend Zombocom
Selenium is your fren
Selenium looks like the most overkill and the most compatible option at the same time. Really cool! Thanks!
Beautiful Soup (python library, bs4) is also fren
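In case it helps, a minimal requests + Beautiful Soup sketch; the URL and the CSS selector are placeholders you'd swap for the real page.

```python
# Minimal requests + BeautifulSoup example; URL and selector are placeholders.
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/articles")
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Grab the headline text from every <h2 class="title"> on the page.
for heading in soup.select("h2.title"):
    print(heading.get_text(strip=True))
```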
What do you want to scrape?