Just remember that Orson Scott Card is a massive homophobe!
Good books though.
Realistically, there does need to be some consideration, but the medium they travel through isn't air; it's the occasional speck of dust, hydrogen atom, and other small stuff. It's not much, but for interstellar travel there are still considerations needed, namely reducing your cross-sectional area in the direction of travel. Long and thin gives you less drag since it hits less stuff.
Regardless, the airplane look doesn't make much sense anyway :)
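A quick sketch of why long-and-thin wins, using the classic drag equation. All the numbers here (medium density, speed, drag coefficient, frontal areas) are made-up illustrative assumptions, not real mission parameters:

```python
# Hypothetical sketch: drag force scales with cross-sectional (frontal) area.

def drag_force(density, speed, drag_coeff, area):
    """Classic drag equation: F = 1/2 * rho * v^2 * Cd * A."""
    return 0.5 * density * speed**2 * drag_coeff * area

# Interstellar medium is thin: roughly 1 hydrogen atom per cm^3,
# about 1.7e-21 kg/m^3.
rho = 1.7e-21          # kg/m^3 (assumed)
v = 0.1 * 3e8          # 10% of light speed, m/s (assumed)
cd = 2.0               # assumed drag coefficient

# Same ship volume, two shapes: a wide disc vs. a long needle.
wide = drag_force(rho, v, cd, area=100.0)   # 100 m^2 frontal area
thin = drag_force(rho, v, cd, area=1.0)     # 1 m^2 frontal area

print(wide / thin)  # drag ratio, ~100: drag scales linearly with frontal area
```

Even at these tiny densities, the relative advantage is the same: cut your frontal area by 100x and you cut drag (and erosion from impacts) by 100x.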
Actually, space in general is mostly two-dimensional, in that all the interesting stuff generally takes place on some sort of almost-flat plane. A star system is generally on a plane, so is the galaxy, and so are most planet+moon systems. They just tend to be different planes, so for ease of communication you will probably just align your idea of down with whatever the most convenient plane is. This is of course ignoring what gravity-down is, as that changes as thrust does.
And as for ship alignment, yeah, no one is going to worry about that till it's time to dock, at which point the lighter vessel will likely change its orientation, since that's easier and takes less energy. Spaceships are not going to be within human sight range of each other most of the time, even when in relatively the same area. Space is too big, and getting ships close to each other is dangerous!
But in media that fucks with people’s idea of meeting and seeing each other so for convenience of not confusing the audience you don’t see that level of realism often.
In more realistic scenarios, “down” is just defined by the direction of thrust. So approaching a ship, they will be down assuming you are decelerating to match their velocity, but they will be up if you are still thrusting towards them.
But all of that has almost nothing to do with how people will think of orientation relative to other ships, since generally speaking you won't be using eyesight to communicate ship to ship. At that point an agreed-upon down will be needed: probably aligned with the galactic or star-system plane to establish a reference, and probably the right-hand rule to establish up and down. In general, given that space is big and ships are small, they will just be points on each other's radar until they need to dock with each other, so it doesn't really matter how people are actually oriented, as long as what they communicate makes sense to the other side.
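The right-hand-rule convention above can be made concrete with a cross product: pick two agreed-upon directions lying in the reference plane, and the right-handed cross product fixes which of the plane's two normals counts as "up". The reference vectors here are hypothetical stand-ins (e.g. directions toward two beacon stars):

```python
# Hypothetical sketch: deriving a shared "up" from an agreed reference plane.

def cross(a, b):
    """Right-handed cross product of two 3D vectors."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

# Two agreed-upon directions lying in the reference plane (made up).
ref_x = (1.0, 0.0, 0.0)
ref_y = (0.0, 1.0, 0.0)

up = cross(ref_x, ref_y)  # right-hand rule: x cross y points "up"
print(up)  # (0.0, 0.0, 1.0)
```

As long as both ships agree on the two reference directions and their order, they compute the same "up", no eyesight required.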
edit: or maybe down is toward the currently orbited gravity well, like toward a planet/moon/star.
Idk about anyone else, but it's a bit long. Up to Q10 I took it seriously and actually looked for AI-gen artifacts (and got all of them up to 10 correct), and then I just sorta winged it, guessed, and got like 50% of them right. OP, if you are going to use this data anywhere, I would first recommend getting all of your sources together, as some of those did not have a good source, but also maybe watch out for people doing what I did: getting tired of the task and just wanting to see how well they did on the part they tried. I got like 15/20.
For anyone wanting to get good at seeing the tells, focus on discontinuities across edges: the number or intensity of wrinkles across the edge of eyeglasses, or the positioning of a railing behind a subject (especially if there is a corner hidden from view; you can imagine where it is, the image gen cannot). Another tell is a noisy mess where you expect noisy-but-organized: cross-hatching trips it up, especially in boundary cases where two hatches meet, where two trees or other organic-looking things meet, or anywhere lines have a very specific way of resolving when they come together. Finally, look for real-life objects that are slightly out of proportion: these things are trained on drawn images, photos, and everything else, and thus cross those influences a lot more than a human artist would. The eyes on the Lego figures gave it away, though that one also exhibits the discontinuity across edges with the woman's scarf.
15 mph is plenty fast enough to belong in the bike lane. You’re good bro.
That's how it's supposed to work, and in practice it kinda does, but the people with the money want positive results, and the people doing the work have to do what they can to stay alive and relevant enough to actually do the work. Which means that while most scientists are willing to change their minds about something once they have sufficient evidence, gathering that evidence can be difficult when no one is willing to pay for it. Hard to change minds when you can't get the evidence to show some preconceived notion was wrong.
Also known as PTFE, it is a plastic with an insanely low coefficient of friction, and it is thus incredibly fucking useful for so many things. And much like the last weirdly-good-at-everything substance (asbestos), it turns out it really should not have been put in everything, but it's probably not quite as bad as asbestos.
Wdym? Pregnancy is the original lootbox, never know what kind of kids you’re gonna get.
Outside of the cost of hardware, it's just power. Running these sorts of computations is getting more efficient, but the sheer amount of computation means it's gonna take a lot of electricity to run.
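A rough back-of-envelope shows how fast the power bill adds up. Every number here (cluster draw, runtime, electricity price) is a made-up assumption for illustration, not a real deployment figure:

```python
# Hypothetical estimate: electricity cost of running a compute cluster.

watts = 500_000        # assumed cluster draw: 500 kW
hours = 24 * 30        # one month of continuous operation
price_per_kwh = 0.12   # assumed electricity price in USD

kwh = watts / 1000 * hours     # energy used in kilowatt-hours
cost = kwh * price_per_kwh     # monthly power bill

print(f"{kwh:,.0f} kWh, ${cost:,.0f}")  # 360,000 kWh, $43,200
```

And that's just steady-state inference-style draw; a big training run concentrates the same kind of bill into months of near-peak utilization.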
they know it’s impossible to do
There is some research into ML data deletion, and it's shown to be possible, but maybe not at larger scales, and maybe not something that is actually feasible compared to retraining.
While you are overall correct, there is still a sort of "black box" effect going on. While we understand the mechanics of how the network architecture works, the actual information encoded by training is, as you said, not stored in a way that is easily accessible or editable by a human.
I am not sure if this is what OP meant by it, but it kinda fits and I wanted to add a bit of clarification. Relatedly, the easiest way to uncook (or unscramble) an egg is to feed it to a chicken, which amounts to basically retraining a model.
Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying, they will ignore it and deal with the consequences. Us regular folk generally can't afford not to comply (except for all the low-stakes laws we break on a day-to-day basis), but when you have money to burn and a lot at stake, the decision becomes more complicated.
The tech part of that is that we don't really even know if removing data from these sorts of models is possible in the first place. The only sure way to remove it is to throw away the old one and train a new one (aka retraining the model) without the offending data. This is similar to how you can't get a person to forget something without some really drastic measures; even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly it can be to retrain a model, the laws start looking like "necessary operating costs" instead of absolute rules.
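The retrain-without-the-data idea (sometimes called exact unlearning) can be shown with a toy stand-in for a real network. The "model" here is just a per-class mean, a deliberately simplified hypothetical; real models make the recompute vastly more expensive, which is the whole problem:

```python
# Minimal, hypothetical illustration of exact unlearning: the only sure way
# to remove a record's influence is to retrain from scratch without it.

def train(data):
    """Fit the toy model: one centroid (mean feature value) per label."""
    grouped = {}
    for label, value in data:
        grouped.setdefault(label, []).append(value)
    return {label: sum(vals) / len(vals) for label, vals in grouped.items()}

data = [("spam", 0.9), ("spam", 0.8), ("ham", 0.1), ("ham", 0.2)]
model = train(data)

# A deletion request comes in for ("spam", 0.9). In general you can't
# subtract its influence out of the finished model, so you retrain without it.
remaining = [row for row in data if row != ("spam", 0.9)]
retrained = train(remaining)

print(model["spam"])      # still influenced by the deleted record (~0.85)
print(retrained["spam"])  # 0.8: the record's influence is provably gone
```

For this toy model you could actually subtract the deleted point out of the mean cheaply; the research on approximate unlearning is essentially about finding analogous shortcuts for deep networks, where no clean subtraction exists.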
My vote for Biden was an anything-but-Trump vote, but given Biden's current record as president, he has my vote again.
Still not my first choice but we live in a first past the post voting system so gotta take what you can get.
The real AI, now renamed AGI, is still very far
The idea and the name AGI are not new, and "AI" has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times though, since despite learning so much over the years, we have no idea how hard AGI is going to be. As of right now, the only correct answer to "how far away is AGI?" is "I don't know."
Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew that it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still at least 10-20 years out. Now my home computer can run a model that can talk like a human.
Being able to talk like a human used to be what the layperson would consider AI; now it's not even AI, it's just crunching numbers. And this has been happening throughout the entire history of the field. You aren't going to change this person's mind; this bullshit of discounting the advancements in AI has been here from the start. It's so ubiquitous that it has a name: the AI effect.
ChatGPT is amazing for describing what you want, getting a reasonable output, and then rewriting nearly the whole thing to fit your needs. It's a faster (shittier) Stack Overflow.
Looks like it's just setting some event handlers. These two lines should clear the anti-select and the anti-right-click respectively if pasted into the debug console:

document.body.onselectstart = null;
document.oncontextmenu = null;