Actually we know the system prompt. It doesn’t have “I am a sentient being” anywhere in it. Stop making stuff up.
Reading back over this, I think you have me confused with another commenter. I don’t mention anything about IF in the comment you are replying to. Someone else did, though.
Yes, genetic algorithms are something different. They are sometimes used in training or architecting NNs, but not at the scale of modern LLMs.
FYI, you can have all-or-nothing outputs from a perceptron or other network. It all depends on the activation function. Most LLMs don’t use that kind of activation function, but it is possible. Have you heard of BitNet? It uses only one of three states for each neuron output in an LLM. It’s interesting stuff.
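A quick sketch of the difference in plain NumPy. This is illustrative only: the threshold value is an assumption for the example, not BitNet's actual quantization scheme.

```python
import numpy as np

def step(x):
    # All-or-nothing perceptron-style activation: output is 0 or 1
    return (x >= 0).astype(float)

def ternary(x, threshold=0.5):
    # BitNet-style ternary output: each value becomes one of {-1, 0, 1}
    return np.where(x > threshold, 1.0, np.where(x < -threshold, -1.0, 0.0))

x = np.array([-1.2, -0.3, 0.0, 0.7])
print(step(x))     # [0. 0. 1. 1.]
print(ternary(x))  # [-1.  0.  0.  1.]
```

Swap in a smooth function like GELU instead and you get the graded outputs most LLMs actually use.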
Kinda, but also no. That’s specifically a dense neural network, or MLP. It gets a lot more complicated than that in some cases.
It’s only one type of neural network: a dense MLP. You also have sparse neural networks, recurrent neural networks, convolutional neural networks, and more!
Not all machine learning is neural networks. There are plenty of machine learning algorithms, like random forests, that are not neural networks. Deep learning refers to large, many-layered neural networks.
To be more specific this is an MLP (Multi-Layer Perceptron). Neural Network is a catch all term that includes other things such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Diffusion models and of course Transformers.
What you are arguing with online is some variant of a Generative Pre-trained Transformer. Those do have MLP or MoE layers, but that’s only one part of what they are: they also have multi-headed attention mechanisms and embedding + unembedding layers.
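A toy sketch of that structure in NumPy: one simplified transformer block with a single attention head followed by an MLP. All the shapes, weights, and the random seed here are illustrative assumptions; real models also have layer normalization, many heads, and the embedding/unembedding steps mentioned above.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Single attention head: every token mixes information from every other token
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def mlp(x, W1, W2):
    # The per-token MLP layer: expand, apply a nonlinearity, project back down
    return np.maximum(x @ W1, 0) @ W2

rng = np.random.default_rng(0)
d, seq = 8, 4                       # toy model width and sequence length
x = rng.normal(size=(seq, d))       # stand-in for embedded tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))

# One block: attention, then MLP, each with a residual connection
x = x + attention(x, Wq, Wk, Wv)
x = x + mlp(x, W1, W2)
print(x.shape)  # (4, 8)
```

A real GPT stacks dozens of these blocks, so the MLP diagram in the picture is just one component among several.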
I know all this and wouldn’t call myself a machine learning expert; I just use the things. Though I did once train a simple MLP like the one in the picture. I think it’s quite bad to call yourself a machine learning expert without knowing all of this and more.
What movie is this?
It’s not even my claim you are talking about, jackass. Read the usernames. If you have fallen into the rabbit hole that is Lemmy, you should have been around long enough to know about reCAPTCHA. If not, it’s one DuckDuckGo search away. In fact, you could just click the link on the reCAPTCHA itself that explains how they use the data for training. Hardly arcane knowledge.
Your comment read to me like sealioning.
Google reCAPTCHA? They literally talk about this publicly. It’s in their mission statement or whatever. It’s used to train other kinds of models too.
If it really is a propaganda, infiltration, and pressure tactic, then none of those things justify its existence. Of course, it might not be just that; reality is complicated. Either way, promoting interventionism from a country with as horrific a record as the USA is a bad idea.
I could be being daft here, but I thought USAID was a propaganda and infiltration tactic: a way for the USA to put pressure on other countries.
A techbro? Do you think I work for some big company? I am a PhD student motherfucker.
well that settles it then! you’re apparently such an authority.
I am someone who is paid to research the uses and abuses of AI and LLMs in a specific field. So compared to randos on the internet like you, yeah, I could be considered an authority. Chances are, though, you don’t actually care about any of this. You just want an excuse to hate on something you don’t like and don’t understand, and to blame it for already well-established problems. How about you actually take some responsibility for the state of your fellow human beings and do something helpful instead of being a Luddite.
I don’t trust OpenAI and try to avoid using them. That being said they have always been one of the more careful ones regarding safety and alignment.
I also don’t need you or OpenAI to tell me that hallucinations are inevitable. Here, have a read of this:
Xu et al., “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, 2025-02-13, http://arxiv.org/abs/2401.11817
Regarding resource usage: this is why open-weights models, like those made by the Chinese labs or Mistral in Europe, are better. They’re much more efficient, and frankly more innovative, than whatever OpenAI is doing.
Ultimately, though, you can’t just blame LLMs for people committing suicide. That’s a lazy excuse to avoid addressing real problems, like how society treats neurodivergent people. The same problems lead to radicalization, including incels and neo-Nazis, and they were all happening before LLM chatbots took off.
I am sure the terminal IDEs are great. I used to play around with vim myself, and still use it for editing config files. I have had some success with JetBrains as well; it’s a solid product.
I don’t really have the energy it takes to configure and learn everything needed for a terminal-only setup these days. I guess I am just not as discerning as you are. I might try a ready-made solution like LazyVim.
The 50s? Did LLMs exist in the 50s?
This is why safety mechanisms are being put in place, and why AIs are being tuned to act less like sycophants.
So far there have been about two instances of this happening, from two different companies. Already there is a push by these companies for better safety and for AIs that act less like sycophants. So this isn’t the huge issue you are making it out to be. Unless you have more reports of this happening?
Ultimately, crazy people gonna be crazy. If most humans are as you say, then we have a more serious problem than anything an AI has done.



Sugar isn’t a drug lol