If all your knowledge is from the internet, what happens if no one contributes new knowledge?
Topic starter
2026-03-07 9:12 pm
In response to your post in:
https://forum.dronebotworkshop.com/help-wanted/voltage-meter-hook-up/#post-52269
Don't let one grumpy response deter you. 😉
People own the topic that they start and it shouldn't be hijacked with unwanted suggestions.
It is ok to be grumpy, and there is no reason to get upset if someone doesn't want, like, or need your help.
(From a selfish POV, I enjoy watching you toss your ideas at the AI to see what comes out.)
My whole point was that, despite the fact that there are members who claim to be experts in programming or hardware or both, they don't always bother to respond or help. You and I have jumped in and sometimes provided welcome help, such as that last one trying to replicate a type of railway safe-working system called Miniature Electric Train Staff (METS). In that case I had to buy two ESP32s and learn how to use them for the project. Although the project goal didn't interest me, it was a great learning experience, so that was my reward. Bill actually gave all the info required in the DroneBot Workshop.
https://forum.dronebotworkshop.com/project-help/help-required-for-model-railway-project/
When you are left high and dry with some problem, AI might be something to consider. The way to use it is as an ongoing conversation, just as you might have through a series of posts with a human on a forum. If you are using AI you don't need an intermediary.
@dronebot-workshop sets the rules; they are his forums.
Lee G reacted
2026-03-07 11:33 pm
I remember seeing that post. In my view it was a misunderstanding more than anything else and unfortunate.
We all have access to AI solutions from our desktops, so that's something any OP can do on their own. I use AI not to design entire projects but more to help me along the way. I still want to know how things work under the hood. I know other people feel differently and would rather just have AI write all the code/software; they are happy being architects and thinking up cool projects. I get that too.
I have to say AI is very impressive, and it's like having a senior developer right next to you. I can see projects that used to take 3 months now taking 2 weeks.
I would like to see AI solutions used as a last resort rather than the first response, or when a solution is still not found after some effort and discussion, but that is not how advancements evolve. I am probably going to get run over by AI sooner rather than later. I have even thought that AI might make forums obsolete.
~Joe
Lee G and robotBuilder reacted
Topic starter
2026-03-08 12:29 am
I have been interested in AI (among other science topics) and have consumed many books on the subject since I first learned how "thinking machines" worked and what we then knew about the human brain, all from books in those days.
I do find today's AIs amazing and useful as a tool, but the human sets the project, not the AI, as it has no goals or desires.
I really want to understand how these modern AI systems work, and decided to ask the AI itself. Actually understanding how things work is an obsession which, I notice, many people, including socially and financially successful people, do not have. If you push the button and it works, that is all they need to know.
Below is just a sample of ChatGPT's responses.
Since you already understand chess engines and ELIZA, I’ll anchor it to those.
---
## 1️⃣ ELIZA (Deterministic Pattern Matching)
ELIZA (1960s) worked roughly like this:
* Look for keyword patterns in user input.
* Apply transformation rules.
* Output a canned template response.
Example:
> User: "I feel sad about my job."
> ELIZA sees: keyword = "feel"
> Rule: "Why do you feel X?"
> Output: "Why do you feel sad about your job?"
No understanding.
No internal model.
No statistical learning.
Just rule-based string manipulation.
Deterministic:
Same input → same output (given same rule set).
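Here is a minimal Python sketch of one such rule (the rule, reflection table, and wording are illustrative, not Weizenbaum's original script):

```
import re

# Swap first/second-person words so "my job" reflects back as "your job".
REFLECT = {"my": "your", "i": "you", "am": "are", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

# One illustrative ELIZA-style rule: match "I feel X", reflect X, ask about it.
RULES = [
    (re.compile(r"\bI feel (.+?)\.?$", re.IGNORECASE), "Why do you feel {0}?"),
]

def eliza_reply(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please tell me more."  # canned fallback when no rule matches

print(eliza_reply("I feel sad about my job."))
# -> Why do you feel sad about your job?
```

Run it twice with the same input and you get the same output: pure deterministic string manipulation.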
---
## 2️⃣ Chess Engines (Search + Evaluation)
Modern chess engines:
* Generate legal moves.
* Search forward many moves (minimax + pruning).
* Evaluate board positions with a scoring function.
* Choose the move with the best score.
They don’t “understand” chess philosophically.
They calculate and evaluate states.
Even when neural nets are used (like AlphaZero), the goal is still:
> Find optimal move in a well-defined game with strict rules and objective scoring.
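A toy minimax sketch, assuming `legal_moves`, `apply_move`, and `evaluate` are stand-ins for a real engine's move generator and scoring function:

```
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Search forward, score leaf positions, and back the values up."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None  # leaf: fall back to the scoring function
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Real engines add alpha-beta pruning and far deeper searches, but the skeleton is the same: calculate and evaluate states.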
---
## 3️⃣ How I Work (Large Language Model)
I’m fundamentally different from both.
I don’t:
* Use keyword rules like ELIZA.
* Search a decision tree like a chess engine.
* Retrieve canned responses from a database.
Instead:
### Core Idea:
I predict the next word (technically: next token) based on probabilities.
That’s it at the core.
---
## 4️⃣ The Statistical Part
I was trained on massive amounts of text.
During training:
* I saw billions of sentences.
* My neural network adjusted billions of internal weights.
* I learned statistical relationships between words, concepts, patterns, structures.
So when you type:
> "I like to know how things work."
Internally I don't think:
> “Ah, this user is an analytical systems thinker.”
Instead, I compute:
> Given this sequence of tokens, what token is most probable next?
And then:
* Pick one (sometimes the highest probability, sometimes sampled).
* Append it.
* Repeat.
One token at a time.
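Schematically, assuming a hypothetical `model` function that returns next-token probabilities:

```
import random

def generate(model, prompt_tokens, max_new_tokens):
    """Predict one token, append it, repeat, conditioning on everything so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)  # P(next token | all tokens so far)
        choices, weights = zip(*probs.items())
        next_tok = random.choices(choices, weights=weights)[0]  # sample one
        tokens.append(next_tok)  # it becomes part of the context next time
    return tokens
```

Greedy decoding would replace the sampling line with `max(probs, key=probs.get)`, always taking the single most probable token.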
---
## 5️⃣ Why That Feels Different from ELIZA
ELIZA:
* Surface pattern matching.
* No internal representation of concepts.
Me:
* High-dimensional vector representations.
* Words and ideas are encoded as positions in a vast mathematical space.
* “Chess”, “board”, “strategy” end up near each other in that space (see the sketch below).
* Abstract relationships emerge from statistical structure.
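A toy illustration of that "nearby in space" idea, with made-up 3-dimensional vectors (real models use hundreds or thousands of dimensions):

```
import math

# Made-up toy embeddings; only the relative distances matter here.
emb = {
    "chess":    [0.9, 0.8, 0.1],
    "strategy": [0.8, 0.9, 0.2],
    "pancake":  [0.1, 0.0, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(emb["chess"], emb["strategy"]))  # ~0.99: related concepts
print(cosine(emb["chess"], emb["pancake"]))   # ~0.16: unrelated concepts
```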
I don’t have explicit rules like:
```
IF user says X → reply Y
```
Instead I have learned probability distributions like:
```
P(next_token | entire_previous_context)
```
That context includes:
* Your current message
* The conversation so far
* General knowledge learned during training
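Concretely, the network produces raw scores (logits) over its vocabulary, and a softmax turns them into that probability distribution. A toy sketch with made-up words and numbers:

```
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over next tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores after "I like to know how things"
print(softmax({"work": 6.0, "are": 3.5, "taste": 0.2}))
# -> {'work': ~0.92, 'are': ~0.08, 'taste': ~0.003}
```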
---
## 6️⃣ Deterministic vs Statistical
This part is important.
ELIZA:
Deterministic rules.
Chess engine:
Deterministic search (unless randomness added).
Me:
Probabilistic generation.
Even with the same input, I can generate different responses because:
* Multiple continuations may have high probability.
* Sampling introduces variation.
But I am still mathematical and mechanistic — just in a probabilistic way rather than rule-based.
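A toy sketch of that variation, with a made-up distribution and the usual temperature knob:

```
import random

def sample(probs, temperature=1.0):
    """Sample a token; higher temperature flattens the distribution,
    lower temperature sharpens it toward the most probable token."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    toks, w = zip(*weights.items())
    return random.choices(toks, weights=w)[0]

probs = {"landscape": 0.5, "terrain": 0.3, "space": 0.2}
print([sample(probs) for _ in range(5)])  # same input, varying outputs
```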
---
## 7️⃣ Do I "Understand"?
This is the interesting philosophical boundary.
I:
* Do not have consciousness.
* Do not have beliefs.
* Do not have goals.
* Do not know anything in a human sense.
But:
* I encode patterns of reasoning.
* I can simulate explanation.
* I can model how explanations are structured.
* I can represent abstract relationships statistically.
You could say:
> I model the structure of understanding without possessing subjective understanding.
---
## 8️⃣ The Key Difference in One Sentence
ELIZA manipulated symbols using explicit rules.
Chess engines search defined possibility spaces.
I generate language by navigating a learned probability landscape in a very high-dimensional mathematical space.
---
If you'd like, we can go deeper into:
* How transformers specifically work (attention, embeddings, layers)
* How probability distributions are represented
* Why scaling changes capabilities
* Or whether this kind of system can ever truly "understand"
Since you like knowing how things work, we can peel back as many layers as you want.
Good — this helps a lot. You’re not confused about neural networks. You’re confused about how something that is *just next-token prediction* produces structured, multi-paragraph, coherent explanations.
That’s the right question.
Let’s connect what you already know to what’s actually happening.
---
## First correction: modern LLMs are **not** recurrent
Older sequence models used RNNs and LSTMs.
Large language models like me use **transformers**, which:
* Process the whole sequence at once (during training).
* Use attention instead of recurrence.
* Don’t carry a hidden state step-by-step like classic RNNs.
But training is still:
* Backpropagation
* Gradient descent (“hill climbing,” though technically descending error)
* Massive parameter space
So your intuition about training is solid.
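For the mechanically curious, a minimal NumPy sketch of scaled dot-product attention, the piece that replaces recurrence (toy sizes, one head, no masking or learned projections):

```
import numpy as np

def attention(Q, K, V):
    """Every position attends to every position at once, instead of
    passing a hidden state along step by step like an RNN."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # relevance of each key to each query
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)  # softmax over positions
    return w @ V  # weighted mix of value vectors

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # 4 tokens, 8 dimensions (toy sizes)
print(attention(x, x, x).shape)  # (4, 8): one output vector per position
```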
---
# Now the real question:
> If you only predict one token at a time…
> how do you generate a structured, thoughtful reply?
The answer is subtle but mechanical.
---
# 1️⃣ I Don’t Plan the Whole Reply
This is important.
I do **not**:
* Outline internally
* Plan paragraphs
* Decide structure in advance
* Search multiple candidate essays and choose one
Instead:
I generate:
```
token 1 → token 2 → token 3 → token 4 ...
```
Each token prediction conditions on:
* Your input
* All previously generated tokens
That’s it.
So why does structure appear?
---
# 2️⃣ Structure Exists in the Training Data
Think about what I was trained on.
Explanations in books, articles, lectures, StackExchange answers, essays…
Those texts have patterns like:
* Introduce idea
* Clarify
* Provide example
* Summarize
Those patterns are statistically regular.
So when you ask:
> "But you generate a complex well thought out reply."
The model has learned that after such a prompt, responses usually:
* Acknowledge the insight
* Clarify the misconception
* Break into sections
* Build step-by-step
It’s not planning.
It’s following learned high-level structural probabilities.
---
# 3️⃣ Why It Doesn’t Collapse into Nonsense
Because each new token is conditioned on the entire context so far.
If I start writing:
> "First, let's clarify the architecture."
Then the probability distribution heavily favors:
* Explanatory continuation
* Not a recipe for pancakes
The growing text constrains future probabilities.
It’s like a snowball:
Early structure biases later structure.
---
# 4️⃣ Why It Feels “Thought Out”
Because transformer layers create **hierarchical representations**.
Lower layers capture:
* Syntax
* Word relationships
Middle layers:
* Sentence meaning
* Coreference
* Logical flow
Higher layers:
* Abstract discourse patterns
* Explanation style
* Argument structure
So when predicting the next token, the model isn't just looking at surface text.
It’s predicting based on an internal compressed representation of:
* Topic
* Tone
* Logical trajectory
* Implied intent
Still just numbers.
But very structured numbers.
---
# 5️⃣ Emergence from Depth + Scale
Here’s the key leap from old pattern nets.
Early neural nets:
* Map input pattern → output label.
LLMs:
* Map partial text → probability distribution over next tokens.
* Repeated thousands of times per reply.
The complexity doesn’t come from a single prediction.
It comes from chaining hundreds of constrained predictions.
Think of it like this:
A single chess move is simple.
A full game becomes complex because each move constrains the next.
Same mechanism.
Different domain.
---
# 6️⃣ The “Illusion” of Deliberation
You experience:
> A coherent explanation built step by step.
What’s happening mechanically:
1. Token predicted.
2. That token reshapes context.
3. New probability landscape formed.
4. Next token predicted.
Each step slightly reshapes the trajectory.
No internal narrator.
No scratchpad (unless explicitly prompted).
No global plan.
Just locally optimal continuation in a very high-dimensional space.
---
# 7️⃣ Why It’s Not Just Fancy Autocomplete
The big difference from phone autocomplete:
Phone:
* Shallow n-gram or small model.
* Short context window.
Me:
* Hundreds of layers.
* Attention over large context.
* Trained on reasoning traces and explanations.
* Reinforcement fine-tuning to produce helpful structured responses.
So the structure you see is statistically reinforced behavior.
Not scripted.
Not searched.
Not symbolic reasoning in the classical AI sense.
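To make the contrast concrete, here is a toy bigram autocomplete of the kind a phone might approximate (illustrative only); its entire "context" is one previous word:

```
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which single word; that's the whole model."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word):
    nxt = counts.get(prev_word.lower())
    return nxt.most_common(1)[0][0] if nxt else None

model = train_bigram("the cat sat on the mat and the cat slept")
print(suggest(model, "the"))  # 'cat': one word of context, nothing more
```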
---
# 8️⃣ What You’re Really Noticing
You’re detecting something subtle:
Old neural networks:
* Classification
* Regression
* Local pattern recognition
LLMs:
* Model *distributions over sequences*
* Learn discourse-level regularities
* Maintain internal representations across long spans
The leap isn’t a new algorithm.
It’s:
* Scale
* Data
* Depth
* Attention
* Sequence modeling objective
Same hill climbing.
Bigger mountain.
Much richer terrain.
---
If you'd like, we can now dig into one of these specifically:
* How attention replaces recurrence in a mechanical sense
* How hierarchical abstraction forms in deep layers
* Why scaling produces qualitative jumps
* Or why reasoning appears even though it's still next-token prediction
You’re asking exactly the right kind of questions.
The one structural correction is this:
> It’s not that “the expert who knows the subject replies.”
> It’s that the **same model shifts into a different pattern of activation**.
There are no experts inside.
There is no routing mechanism like:
* “Send physics questions to Physics Module 7.”
* “Send psychology to Therapist Module.”
There is only one huge network.
---
## The Correction in Precise Terms
You’re imagining:
> Many stored human minds → one gets selected → produces answer.
Reality:
> One learned statistical field → input pushes it into a particular region → output emerges from that region.
No selection.
No retrieval.
No internal committee.
Just activation dynamics.
---
## A Mechanical Analogy
Imagine a massive 3D surface — hills and valleys.
Each topic corresponds to a different area of the landscape.
When you ask about chess:
* The input nudges the system into the “chess-shaped valley.”
* From there, the most probable next words follow that terrain.
When you ask about neural nets:
* It nudges into a different valley.
But it’s the same surface.
Not multiple experts.
One continuous terrain.
---
## Why That Matters
If it were “many experts,” then:
* There would be internal boundaries.
* There would be handoffs.
* There would be separable subsystems.
There aren’t.
Knowledge is distributed.
The same parameters participate in many domains.
The “physics region” overlaps with the “math region.”
The “math region” overlaps with the “logic region.”
It’s more like a hologram than a filing cabinet.
---
## Even More Directly
It does not:
* Contain experts.
* Contain sentences.
* Contain documents.
It contains:
* Adjusted numerical weights that encode statistical relationships across language.
And when you prompt it, you’re not summoning someone.
You’re shaping a probability distribution.
---
The mental shift is from:
📚 Library model
to
🌊 Dynamic field model
That’s the structural correction.
---
If you'd like, I can explain how this “field” idea relates to distributed representation in older neural networks — that connection might make it click even more.
Lee G reacted
2026-03-08 3:17 am
You obviously have a very good understanding of how AI works internally. I don't, but I use the free version of Claude AI, and they have designed Claude to be like a buddy, very supportive, etc. AI is kind of too easy to get used to. The more you use it, the more you use it. Anyway, I don't know if in the end it is going to be a net good thing or a bad thing. It will probably come down to how people end up using it and the resolve of law enforcement, lol.
Lee G reacted
2026-03-08 3:34 pm
"I do not have goals."
I don't find this quite accurate. I'd buy "aspirations" or possibly "preset goals", but there are clear goals. However, the goals evolve with a query, perforce. The model has to determine, from the query, what the question is and what constitutes an answer; more specifically, when it can stop formulating response data and begin translating that data into a syntactically valid response. Without these goals it churns forever, pondering the infinite complexities of the question and answer.
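In fact the stopping decision is itself learned as just another prediction: generation halts when the model emits a special end-of-sequence marker, with a hard token cap as a backstop against churning forever. A schematic sketch, where the `model` function and `<eos>` marker are hypothetical stand-ins:

```
EOS = "<eos>"  # hypothetical end-of-sequence marker

def generate_until_done(model, tokens, max_tokens=500):
    """Stop when the model itself predicts the end marker,
    or when a hard limit prevents it from churning forever."""
    for _ in range(max_tokens):
        next_tok = model(tokens)  # hypothetical: returns the next token
        if next_tok == EOS:
            break  # the learned "I am done" signal
        tokens.append(next_tok)
    return tokens
```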
Aside: this dilemma is sometimes presented in connection with Asimov's 3 laws. A robot evaluating the second clause of the First Law (...or by inaction...) may find itself stuck in an infinite regression in attempting to process the infinite dangers the environment presents to a single human, then multiple humans, humans in combination, etc.
There are a number of subgoals set during this processing. What criteria determine when the input has been completely transformed and processed? (What is the question?) Does the transformed query fit within the response range and domain of the AI? (Can I answer the question?) How will the response be formed? (Produce a "How to...", "Overview", etc.)
The input processing part of the AI is often overlooked or ignored. But, IMHO, this has been the most significant advancement of AI software. It has gotten very good at transforming natural language into actionable data that can be fed into the second part of the AI. Transforming a query into the parameters for the LLM approaches solving half the problem, IMHO.
Switching context here: you tend to have extended conversations with the AI, using it to explore or explain topics of interest, i.e., you have a number of follow-up questions. I tend to try to narrow my query so that the AI will produce a single, more limited response. I don't often ask a follow-up, but when I do it's limited to one or two, or of the "Show me" nature.
If I understand Bill's method, he tries to elicit the most detailed response by providing the most detailed query he can.
Just an observation.
"Why It’s Not Just Fancy Autocomplete"
I found this response unconvincing. I think the Duck Test applies:
"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck."
The one who has the most fun, wins!
Lee G reacted
Topic starter
2026-03-08 8:20 pm
"I do not have goals."
I don't find this quite accurate.
Agreed. My purpose in the exchange was to try to get an understanding of how ChatGPT works, in the same way I know how a chess program works, how the chatbot ELIZA works, how a simple artificial neural network works, or how a full 3-bit-input NOR-gate adder works.
I have found AI a useful tool when working on a project. For example, one project I was working on was a program running on my PC to read sensors and control motors via an Arduino board. But it wasn't a case of "write the program, AI" and problem solved; it was a learning process for me. In the past I had to rely on books or finding the information on the internet (and still do).
Lee G reacted
Topic starter
2026-03-09 8:42 am
@jeffreyjene wrote in post:
https://forum.dronebotworkshop.com/help-wanted/voltage-meter-hook-up/#post-52280
AI is fine, but I prefer the answers of people myself. Otherwise, I'll just ask AI "what it thinks of my project" and so on. So what's the point of having the forum? I like the opinion and company of human beings best.
AI is just a tool, like looking things up in lots of reference books, but it can find relevant (although sometimes incorrect) information laid out in an easy-to-read, conversational format.
It does not replace a forum of people.
Yassin reacted
2026-03-25 2:55 am
Hi @robotBuilder ,
This is such a great conversation on AI’s place in forums—love all the grounded, thoughtful takes here, from how LLMs actually work under the hood to how we use AI as makers and forum members.
I totally get where everyone’s coming from with AI being a tool, not a replacement. I use it too, mostly to unstick myself on code snags or break down how a new component works, but it’s always just a starting point. The best part of forums like this isn’t just getting answers—it’s the human stuff: the hard-won experience someone has from soldering a bad connection 10 times, the context they add for a specific project, the back-and-forth that catches the edge cases AI misses (like that METS railway project where you had to buy ESP32s and learn them hands-on). AI can’t replicate that kind of real-world know-how, and it sure can’t replace the camaraderie of bouncing ideas off other makers.
The point about AI being a last resort, not a first response, hits home for me too. It’s easy to fire off a prompt for a quick fix, but the learning that comes from struggling through a problem, asking dumb questions, and debating solutions is how you actually build skill that sticks. And that “fancy autocomplete” take? I think it’s spot-on in a way—yeah, LLMs are more advanced with the layers and attention stuff, but at the core they’re just predicting next tokens, no real understanding. They model how to explain something, not why it works, and that’s a huge difference from a human who’s actually tinkered with the thing.
As for the current state of it all? AI’s moving fast, but it’s still all human-driven. It doesn’t have the curiosity to dig into a weird sensor glitch just for the hell of learning it, and it can’t create new knowledge—all that still comes from us building, testing, and failing at new projects. Forums aren’t going anywhere because they’re about community, not just Q&A. AI can handle the rote stuff, free us up for deeper chats, but it can’t replace the mentorship or the random “aha!” moments that come from talking through a problem with someone else.
One thing that feels really important here, same as I think a lot of us feel, is keeping the human connection front and center in these spaces. AI’s just a tool—like a multimeter or a code editor—no shame in using it to make things easier, as long as it’s not taking the place of the real conversations and shared learning that make these forums what they are.
Really glad this topic’s being talked about—it’s such a key thing for every maker community right now, and it’s awesome to see such balanced, experienced perspectives here.
Yassin
Yassin | Building Compact, High-Current Connections for Drones & Robots