@natasha1204 You still have 9 days to solve :) Try.
If you are really into 'learning', it would be better to wait until the contest ends and then read the editorials. Seeking help during an ongoing contest will no doubt boost your rating and motivate you, but it is of no real use, because deep inside you know those ratings aren't genuine. I hope you get my point. Happy learning!
All of us with considerable experience have gone through that phase where confidence was yet to develop. What strikes me on the subject is that it takes practice. That's what you're proposing, but it sounds like you may be expecting more from your present skills than is practical.
There are various competitive programming challenges, and frankly many of them sound absurd. Even class assignments sometimes seem absurd, where the constraints require, for example, performing addition and subtraction without using the "+" or "-" operators.
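For what it's worth, that particular assignment can be done with bitwise operators alone. This is a sketch, not any specific course's expected answer: XOR produces the sum without carries, and AND plus a shift produces the carry, repeated until no carry remains.

```cpp
#include <cassert>

// Add two integers using only bitwise operators.
// XOR adds without carrying; AND + shift computes the carry bits.
int add_no_plus(int a, int b) {
    while (b != 0) {
        unsigned carry = static_cast<unsigned>(a & b) << 1; // carry, shifted into place
        a = a ^ b;                                          // sum without carry
        b = static_cast<int>(carry);                        // carry becomes next addend
    }
    return a;
}
```

On a two's-complement machine this also handles negative operands, since the carry simply shifts out of the word.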
The competitive "events" are really designed to challenge people with developed skills, and sometimes without real objective purposes - just something along the lines of bragging rights. It's like challenging someone to jump up and down on one leg while holding the other foot behind their head. I'm not convinced there's all that much purpose, generally, even to a gymnast or dancer.
That said, there's a long tradition in teaching, of which I regard studying a musical instrument as one of the highest examples, in which exercises are used to strengthen skill. A classically trained musician will spend many hours over many years performing exercises which have no musical value, and may even sound horrific, but there are well reasoned purposes in doing so.
While that may sound contradictory, it is the "well reasoned purpose" clause which strikes the main difference. Many of the competitive programming challenges have no such purpose.
A historical example of a purpose is the invention of the quicksort algorithm. In 1959, Tony Hoare was tasked with writing a translation tool which required sorting, and, especially given the primitive hardware of that era, the sorting algorithms of the time were too slow. Several sorting algorithms already existed, and it is reasonable to say that even then inventing a new one was likely a fool's errand, but Hoare made the attempt.
The result was the Quicksort algorithm. It has been studied for decades since, with some significant improvements (which indicates Hoare's initial work was only a beginning), but the basic underlying design is, to this day, Hoare's initial invention.
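For readers who haven't seen it, the core of Hoare's partition scheme can be sketched in a few lines. This is a minimal modern rendering, not Hoare's original code: two indices sweep inward and swap elements that sit on the wrong side of a pivot.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Hoare partition: i and j sweep toward each other, swapping elements
// that are on the wrong side of the pivot; j ends at the split point.
int hoare_partition(std::vector<int>& v, int lo, int hi) {
    int pivot = v[lo + (hi - lo) / 2];
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { ++i; } while (v[i] < pivot);
        do { --j; } while (v[j] > pivot);
        if (i >= j) return j;
        std::swap(v[i], v[j]);
    }
}

void quicksort(std::vector<int>& v, int lo, int hi) {
    if (lo >= hi) return;
    int p = hoare_partition(v, lo, hi);
    quicksort(v, lo, p);      // note: with Hoare's scheme the split index stays on the left
    quicksort(v, p + 1, hi);
}
```

Decades of refinement (pivot selection, small-array cutoffs, three-way partitioning) sit on top of exactly this skeleton.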
Yet, Hoare was already a well-studied and experienced student.
Recently I had a long exchange on another forum about the AVL binary tree algorithms. Someone studying algorithms and data structures, much like you, wanted to develop an AVL tree from descriptive information alone. This is similar to one of these competitive challenges, but it is far more mainstream and purposeful. It was unlikely that student would develop an AVL storage library worth publishing and using, but the point was to learn what makes the binary tree do what it does, how to balance it, and how to engineer the software that implements it.
The threads trace out, in public view, all of the development challenges and questions. Many dead ends and failed attempts are documented. Some issues were on the very basics of "pointers to pointers" (this was in C, not C++). Other issues were on the basics of how to design recursive algorithms. Still other issues were debates on what should come first, and why, in the flow of the algorithm.
The result worked. It was too simple to be generally usable (it could only sort on integers). At the end of it, however, that student completely understood the AVL binary tree. Any second attempt in the future is now very well informed. Over the 6+ pages of postings, including the full source code as it developed, that student learned how to implement a plan, discover that the plan was a dead end, back out, regroup, and change direction toward a working implementation. It was as much a study of pointers, structures, return values and other basics as it was of the engineering of software, and of debugging and testing.
That student was utterly lost for a majority of the exchange. It wasn't until page 4 or 5 of the thread that things even started to work as expected.
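To give a flavor of what that student was wrestling with, here is a compressed sketch of the AVL mechanics (my own rendering in C++, not the thread's C code): each node tracks its height, and after an insert the tree rebalances with rotations wherever a subtree leans two levels too far. Memory cleanup is omitted to keep the sketch short.

```cpp
#include <algorithm>
#include <cassert>

struct Node {
    int key;
    int height = 1;
    Node *left = nullptr, *right = nullptr;
    explicit Node(int k) : key(k) {}
};

int height(Node* n)  { return n ? n->height : 0; }
int balance(Node* n) { return n ? height(n->left) - height(n->right) : 0; }
void update(Node* n) { n->height = 1 + std::max(height(n->left), height(n->right)); }

Node* rotate_right(Node* y) {           // fixes a left-heavy subtree
    Node* x = y->left;
    y->left = x->right;
    x->right = y;
    update(y); update(x);
    return x;
}

Node* rotate_left(Node* x) {            // fixes a right-heavy subtree
    Node* y = x->right;
    x->right = y->left;
    y->left = x;
    update(x); update(y);
    return y;
}

Node* insert(Node* n, int key) {
    if (!n) return new Node(key);
    if (key < n->key) n->left = insert(n->left, key);
    else              n->right = insert(n->right, key);
    update(n);
    int b = balance(n);
    if (b > 1 && key < n->left->key)   return rotate_right(n);                    // left-left
    if (b < -1 && key > n->right->key) return rotate_left(n);                     // right-right
    if (b > 1)  { n->left  = rotate_left(n->left);   return rotate_right(n); }    // left-right
    if (b < -1) { n->right = rotate_right(n->right); return rotate_left(n); }     // right-left
    return n;
}
```

Even this stripped-down version touches most of what the thread wrestled with: recursion, pointer rewiring, and the order in which updates and rotations must happen.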
So, my advice is that if these competitive challenges seem impossible to crack, then stop trying them. Maybe you'll enjoy reading through various solutions that have been given (which isn't cheating when you're just being curious).
Instead, practice. Build upon your existing skills with more practical goals and pursuits.
So, what would you consider your level of expertise at this point? How long have you studied? What have you built, and what do you find too ambitious to build at this point?
Actually, to be honest, I love to spend the whole day thinking about the solution to a given problem. I started doing long contests two years ago, then left coding for a year in between, and now I have interviews coming next month, so just for the time being I wanted to improve my rating.
@natasha1204 To be honest, your high rating on some coding platform will not help you in your interviews. Suppose you have a rating of 1800+ on CodeChef. The interviewer will assume you are very good at coding and must be familiar with all the basic data structures and algorithms, and will ask conceptual questions accordingly.
And since your rating might not actually be genuine, you might fail to answer those questions, which leaves a worse impression than an average rating would.
A high rating against your name only looks good when you actually earn it by learning and practice.
I've been excoriated (years ago) for proposing the performance oriented nature of writing software. Some thought the notion absurd.
Yet, if you stop writing software for a while your skills fade. It takes a while to get the chops back. Since I was, once, long ago, a classically trained musician, I know this sensation. If you stop for a day, you can feel it in your hands. If you stop for two days, the audience can tell.
A programming challenge that results in two functions performing some mathematical or logical trick is not much actual practice. Building an application, with potentially hundreds of functions and dozens of classes, is something that does amount to practice.
I was recently tasked with something that appeared twice from two sources. One, a thread (I think here) asked about the fastest way to read millions of numbers from a file. The second source was from a client who required an improvement in an existing product that proved too slow reading data from a file.
The thread was a short lived exchange, but the client's requirement was a job.
Now, I haven't worked with the subject in a while. Oh, I've read data from files, certainly, but what I haven't done in the last several years was to review the old ways, the new ways, and compare them at an engineering level. The machines have changed over the decades (I've been at this 40+ years).
The product originally used the old C family of fopen/fread/fwrite file functions (it has a long history). An update, a few years ago, moved the code from a pre-C++03 era into a C++11 style, along with some advancements to the product. The teams had changed hands. The old crew, who started the work in the 90's, were gone. Entirely new crews had come and gone. Maybe 1 of a team of 12 was still on board from 2004. The older code, still functioning, would read data with the fopen/fread/fwrite family, while the newer code used iostreams (in various forms).
An expansion on newer features revealed that the new style was much slower. The older code was much faster loading data. This made me curious to study the problem, and it was a client request that I help sort out what direction they should consider. The "new" team was not fond of C like file operations, but their use of iostream was demonstrably slower. Much slower. As in 4 times slower.
Now, from the late 70's through the 90's, I used fopen/fread/fwrite frequently. I knew it well. I still know it, but I had to look up the function signatures just to be certain of the exact spelling of the mode flags and the order of the parameters. All was familiar, but my speed at even using these functions was slow for the first hour or so as I had to reference the documentation. For a short while I felt like a student again.
I set up engineering experiments, fashioning simple tests using the various means of reading and writing data found in the client's product, testing performance. What I found was both expected and surprising. On reasonably modern but not brand new hardware (this side of Haswell with both hard disks and SSD storage), the fopen family could process around 600 Mbytes/sec, but string data pulled from iostreams was limited to about 150 Mbytes/sec under best case conditions (files were already in OS cache, so the disk itself was almost removed from the performance).
This meant that an SSD, sustaining over 300 Mbytes/sec without cache, could fully saturate the CPU loading data with iostreams using f >> s, where f is the stream and s is a std::string. On mobile this means battery drain. On servers this is an electric bill, and time.
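A rough version of that kind of experiment is easy to reproduce. The file name, size, and layout below are my own assumptions, not the client's data: one function drains a file with fread in large blocks, the other uses formatted operator>> extraction, and timing each call over a large generated file makes the gap between the two APIs visible.

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

// Total bytes read via C stdio, one large fread block at a time.
size_t read_bytes_stdio(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return 0;
    std::vector<char> buf(1 << 20);   // 1 MiB read buffer
    size_t total = 0, n = 0;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) total += n;
    std::fclose(f);
    return total;
}

// Whitespace-separated tokens read via formatted extraction, one string at a time.
size_t count_tokens_stream(const char* path) {
    std::ifstream in(path);
    std::string s;
    size_t tokens = 0;
    while (in >> s) ++tokens;
    return tokens;
}
```

Wrapping each call in std::chrono::steady_clock timestamps, on a file of a few hundred megabytes that is already in OS cache, is enough to reproduce the kind of throughput gap described above; exact numbers will vary by machine and standard library.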
After a bit of research, based on my own theories, I knew full well that something had changed over the years. This software started when the Pentium was new and hard disks could barely supply 40 Mbytes/sec. Some of the older file systems worked, way down at the metal, as block reading devices. They pulled data in block by block. Since NT, however, Windows has used virtual memory mapping at the level of the metal. Layers now exist between application code and the actual OS filesystem. The same is true of *nix (where much of this software runs).
Bottom line, I wrote a memory mapped file stream class family. I used boost::interprocess, which gave me OS independent memory mapping of files, with a few deep digs at file expansion functions (fast ways to expand file size without writing data to the disk).
The design moves the metal closer to the application. Reading data has no layers between application and OS, because pulling data from or writing data to disk works like copying RAM, and doing that one time, not relaying buffers to buffers to disk as the "official" methods do, even the fopen family.
The result was a means of streaming data on that same hardware at 900+ Mbytes/sec. This meant the 150 MByte/sec limit of iostreams and the 600 MByte/sec limit of the fopen family were both elevated to 900+ Mbytes/sec.
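The actual work used boost::interprocess for OS-independent mapping, but the underlying idea can be sketched with the raw POSIX calls (Linux/*nix only; this is my illustration, not the client's code): map the file into the address space and read it as if it were RAM, with no intermediate buffering layer.

```cpp
#include <cassert>
#include <fcntl.h>
#include <string>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map an entire file read-only and copy it out as a string.
// The mapped pages behave like ordinary memory: no fread, no stream buffers.
std::string read_all_mapped(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return {};
    struct stat st {};
    fstat(fd, &st);
    std::string out;
    if (st.st_size > 0) {
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p != MAP_FAILED) {
            out.assign(static_cast<const char*>(p), st.st_size); // reads like a RAM copy
            munmap(p, st.st_size);
        }
    }
    close(fd);
    return out;
}
```

In the real class family the string copy would of course be replaced by parsing directly out of the mapped region, which is where the single-copy advantage comes from.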
It was just as important that where the source storage could not feed data at those speeds, the CPU usage was down so much that on mobile it means saving battery power and on servers it reduces power costs.
Under the hood, this work resembles, slightly and loosely, the kind of thing I read in programming challenges. The result, however, is that I was exposed to far more practical work, both mentally and professionally, than some programming challenge. I would highly recommend such a project. You have many of the same basic problem solving challenges, while producing something that's not only useful, but superior. It revealed to me that there's a weakness born of historical norms in the libraries that only got worse over time, and they can be fixed with something new.
I've returned to my central project, though - using the PhysX engine built from source as a C++ native library in Unity (which otherwise would use C#), for a product.
Have you looked at projecteuler.net? They offer interesting programming problems that generally increase in difficulty. Once you solve a problem, you get access to a discussion board that shows how others solved it. Often you'll learn something interesting by looking at how other people solved it. In really extreme cases, someone figures a way to solve it easily with paper and pencil!
I'd stay away from codechef. To me, those problems are really contrived, but maybe that's just me.
I second that. The problems are often in broken English and so poorly worded that you have to unravel what the question is even asking from their examples rather than their words. You can do it from there, but a properly phrased problem would save a lot of time and energy. And most of them are more about integer math tricks than coding, and a fair number of them seem to want big-int libraries (not 100% sure, but it seems that way).
If you want to learn to code better, this isn't a good place.
If you want to learn to think better at a small subset of math problems, it may be a good place for you.