This isn't the 1990s, when you could simply fetch a static page and easily extract the information you wanted.
Also, scraping a commercial site like
https://www.boursorama.com/, where you have to register and log in to get all the goodies, is almost certainly against their terms of service. If they have behaviour analysis running, they might notice that your requests don't match what a normal human browser would do (like requesting multiple pages with zero delay).
Like I said, use the debug console of your browser to figure out what all the transactions really look like under the hood. It ISN'T a simple GET request.
Low-level web programming is a PITA.
If you absolutely must use C++, at least use a decent library like
https://curl.haxx.se/libcurl/ (there are C++ wrappers for it, if that's your bag).
Even then, it's still a PITA.
When I need to scrape things, I use Python with the Beautiful Soup package.
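For a plain static page the whole pattern is only a few lines. A rough sketch (the URL and the td.price selector here are made-up placeholders, not anything from Boursorama; you'd substitute whatever you found in the debug console, and a real session would also need the login cookies):

import requests
from bs4 import BeautifulSoup

# Fetch the page over plain HTTP(S).
resp = requests.get("https://example.com/quotes")
resp.raise_for_status()

# Parse the HTML and pull out the cells we care about.
soup = BeautifulSoup(resp.text, "html.parser")
for cell in soup.select("td.price"):
    print(cell.get_text(strip=True))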
But if your site actively uses browser-side JavaScript to fetch and decode data, then you might need to use
https://www.selenium.dev/ to let the browser do all the heavy lifting before you can extract the results.
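Roughly like this (assuming Chrome and Selenium 4, which downloads the matching driver for you; again, the URL and selector are made up for illustration):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/quotes")
    # Wait up to 10 s for matching elements to appear while the
    # page's JavaScript populates the DOM.
    driver.implicitly_wait(10)
    for cell in driver.find_elements(By.CSS_SELECTOR, "td.price"):
        print(cell.text)
finally:
    driver.quit()

The point is that the browser runs all the JavaScript for you; you only pick through the finished DOM instead of reverse-engineering every request by hand.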