The biggest hurdle would be cracking the hardware/secure-boot. I don't think it's been done. Google is your friend.
ramielrowe
My impression is that you came into this completely fresh, expecting everyone else to look at your project and tell you what needs to happen to fix it. That is not learning; that is other people telling you what to do. It's also not how the internet works. We aren't your teachers, and what you need is instruction. Not minor feedback, not suggestions, actual instruction. I have a few suggestions if you are actually serious:
- Find a free python course and actually take it. Learn the basics of programming. You cannot judge the code the AI produces if you don't even understand the basics.
- Next, read the book The Clean Coder by Robert Cecil Martin. There you will actually learn the techniques for good professional coding.
- After that, read Making Things Happen by Scott Berkun. I've done a few private personal Claude projects and I run them like a combo Project Manager/Engineering Manager/Staff Engineer.
There are entire college degrees on these topics. And I'm not saying you have to go to college. But, If you're not even willing to read a couple books, then you don't really want to learn.
An issue with your statement "know what you're doing by doing it" is that without an actually educated teacher to provide trustworthy feedback, you are going to struggle to learn from your mistakes. The LLMs can only provide so much, and they will lie out their ass to you. Unless explicitly prompted to provide critical feedback, they will find any way to provide positive feedback, even to your actual detriment. They will happily skirt their sandboxes and fight your every attempt to make them actually safe.
At a quick glance, nothing in the project indicates that you are not an expert or that an AI agent provided the code. The quality of the code is also quite poor, even by Claude standards. I'm actually kinda mind blown you got it to build this without any tests... Something we've recently been talking about at my job in terms of AI agents is the "cognitive debt" that gets incurred in a project. LLMs are fundamentally statistical next-word generators. If they are given something of poor quality, they will tend to produce more and more poor-quality work. And without intervention, it just snowballs.
I'll never tell someone to stop trying to learn. But, your hubris is going to negatively impact your learning outcomes. And to be clear, YOU are not writing the code and the code is what runs on the server and people interact with. What you are doing is using an AI Agent. If you want to get feedback on that, then be honest about it.
EDIT: removing this comment because I don't think you will use this feedback responsibly
I'm sure that's totally going to help OCI's reliability. And Ashburn won't die for 3 straight days again, like it did this week.
If the container you're hosting has a http web service on say port 8080, then you'd want to curl something at http://localhost:8080/. The particular URL/path you hit will depend on the app. If the app is particularly cloud-y, it might even have a specific endpoint for health checking by a container platform. If you share the name of the app I might be able to point you in the right direction.
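As a sketch of the idea: a health check should probe the service inside the container, not some external site. Here's a minimal Python probe, assuming a service on localhost (the port and path are placeholders, not anything from your app):

```python
import urllib.request
import urllib.error

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service at `url` answers with an HTTP 2xx status.

    Non-2xx responses raise HTTPError (a URLError subclass), and
    connection failures raise URLError/OSError, so both land in the
    except branch and report unhealthy.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Port 8080 and the bare "/" path are assumptions; swap in the
    # app's real health endpoint if it exposes one.
    print(healthy("http://localhost:8080/"))
```

The same check as a one-liner in a Docker HEALTHCHECK would just be `curl -f http://localhost:8080/` pointed at your own service, exiting non-zero on failure.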
Wat? Why are people health checking their containers by curl'ing example.com and not the service actually running in the container? Did they not understand that they're supposed to change the curl URL to point at their actual service?
This is such a dumb concept and likely exists entirely to wow investors. A single AI server with 8 GPUs produces something like 7000 watts of heat, if not more. And likewise, it will require at least that much power. Sure, solar is "free" once you've got the panels in space. The real killer, though, is dissipating all that heat. Obviously, there's no atmosphere in space to transfer heat to. Your only option is pure IR radiation, which is significantly less efficient. To put this in context, the ISS's heat dissipation system can reject about 14,000 watts. Ignoring all of the other infra that goes into a data-center, that would be two servers. You add all this up, and the mass of the supporting infrastructure would far outweigh the actual servers. And the economics of satellites and rocket launches come down to mass.
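You can back-of-envelope the radiator problem with the Stefan-Boltzmann law. The numbers below (300 K panel temperature, 0.9 emissivity, one-sided panel, no absorbed sunlight or Earth IR) are my assumptions for a rough sketch, and they're generous:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# radiated flux = emissivity * sigma * T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(heat_watts: float,
                  temp_k: float = 300.0,
                  emissivity: float = 0.9) -> float:
    """One-sided panel area (m^2) needed to radiate `heat_watts` to deep space."""
    flux = emissivity * SIGMA * temp_k ** 4  # W per m^2 of panel
    return heat_watts / flux

# One 8-GPU server (~7 kW) under these assumptions:
print(f"{radiator_area(7000):.1f} m^2")  # roughly 17 m^2
```

That's on the order of 17 square meters of radiator per server, before accounting for sunlight hitting the panels, pumps, plumbing, or structure, and all of that is launch mass.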
Why are AI agents on the org chart? That's odd and sketchy. Seems like it could be some sort of fraud to pad numbers.
Yea, after reading the article, this is an overhaul of the electronic application process that needs to happen before entry. And it'll include not just social media handles, but also email addresses. Seems reasonably easy for a "bad guy" to skirt.
Honestly, I suspect this is a sneaky way to get CBP access to whatever data sharing shit the social media companies have with the rest of the spooks. Simply by attempting to enter the US, someone "agrees" to an automatic search of their social data.
I find it somewhat ironic that this seems to have been posted by a literal blogspam bot.