If you tell a computer to go to a point on a webpage, it jumps straight to those X and Y coordinates. You can get fancier and have it mimic a mouse drag to get there, but by default it takes the most direct route: a perfectly straight line to the destination. You can get fancier still and code in some pseudorandom path activity, but it's nearly impossible to mimic the microscopic randomness humans have when we move our cursors.
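To see the difference concretely, here's a toy Python sketch of the two approaches: a perfectly straight interpolated path, and the same path with random jitter bolted on. (The function names and jitter model are illustrative only; even the jittered version is far more regular than real human cursor data.)

```python
import random

def straight_path(start, end, steps=20):
    """Linear interpolation: the perfectly straight line a naive bot takes."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]

def jittered_path(start, end, steps=20, jitter=3.0, rng=None):
    """The same path with uniform random noise added to each point --
    a crude stand-in for human hand tremor, not a realistic model."""
    rng = rng or random.Random()
    path = [(x + rng.uniform(-jitter, jitter), y + rng.uniform(-jitter, jitter))
            for x, y in straight_path(start, end, steps)]
    # Pin the endpoints so the cursor still lands exactly on the target.
    path[0], path[-1] = start, end
    return path
```

Detection systems can flag the first path trivially (every point sits on one line), and the second still betrays itself statistically: real cursor traces have correlated micro-movements, overshoots, and variable speed, not independent uniform noise.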
To make matters even more difficult for our robot friends, when you land on a page with an "I'm not a robot" checkbox, it captures information from your browser, including:
How long it took the page to load
What browser, plugins, and cookies you're using
Your timezone and time
Your screen size and resolution
Your IP address and general location
How many keystrokes, clicks, and/or scrolls you've made
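Bundled together, the signals above might look something like the sketch below. (The field names and values here are hypothetical, just to make the list concrete; this is not reCAPTCHA's actual payload format.)

```python
# A hypothetical bundle of the kinds of browser signals listed above.
# Field names and values are illustrative, not reCAPTCHA's real format.
browser_signals = {
    "page_load_ms": 842,              # how long the page took to load
    "user_agent": "Mozilla/5.0 ...",  # browser and plugin info
    "cookies_enabled": True,
    "timezone_offset_min": -300,      # timezone and local time
    "screen": (1920, 1080),           # screen size and resolution
    "ip_address": "203.0.113.7",      # IP address and rough location
    "keystrokes": 14,                 # interaction counts so far
    "clicks": 3,
    "scroll_events": 6,
}
```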
reCAPTCHA uses all of this information to judge whether you're more likely to be a human or a robot. If it can't tell, it may prompt you with one of those "Click on every image that has this thing in it" challenges, which are harder to train a robot to solve.
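That three-way outcome (human, robot, or "not sure, show a challenge") can be sketched as a toy scoring rule. To be clear: reCAPTCHA's real scoring is a proprietary machine-learned system; the thresholds and weights below are purely illustrative.

```python
def classify_visitor(signals, human_threshold=0.7, bot_threshold=0.3):
    """Toy risk score: more human-like interaction raises the score.
    Purely illustrative -- not reCAPTCHA's actual logic."""
    score = 0.0
    if signals.get("keystrokes", 0) > 0:
        score += 0.3
    if signals.get("scroll_events", 0) > 0:
        score += 0.3
    if signals.get("page_load_ms", 0) > 200:  # instant loads look scripted
        score += 0.2
    if signals.get("clicks", 0) > 1:
        score += 0.2
    if score >= human_threshold:
        return "human"
    if score <= bot_threshold:
        return "bot"
    return "challenge"  # can't tell: fall back to an image challenge
```

The middle band is the interesting part: rather than guessing, the system escalates to a challenge that is cheap for a human and expensive for a bot.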
Robots can’t pass reCAPTCHA because clicking the box is only one of many signals, and they can’t convincingly mimic the rest (e.g. keystrokes, microscopic random movements, IP addresses).
🧠 Bonus brain points
What's the difference between CAPTCHA and reCAPTCHA?
A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a type of challenge used to determine whether a user is human. First used in 1997, it usually took the form of distorted letters and numbers that humans could decipher but robots couldn't. Eventually, robots were trained on these challenges and learned to bypass them.
reCAPTCHA is a service acquired and then further developed by Google to achieve the same outcome (determining which users are human) but using far more advanced methods, partially described above.