Notifications
Clear all

RADXA X2L, 8Gb ram, nVME, Single Board Computer with Pico 4020 chip & GPIO.

27 Posts
4 Users
1 Reactions
1,131 Views
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

Hi Forum Folks,

Does anyone here have experience with the RADXA X2L SBC yet?  I have one on order, and will report back here my experiences.

My robotic experiments have led me to using a Raspberry Pi 5, 8Gb, running local LLMs, but ran into power issues with the only mobile option using an automotive inverter and the factory power supply.  Not ideal for robotics. Anything less, even top-shelf, Anker brand USB C power bricks, can't run the Pi5 at full CPU without power errors. (if someone has a solution for this, there is a very long and detailed thread about it on the forum)

Therefore, I am looking forward to running the RADXA X2L. It has 8Gb ram, an Intel CPU, and integrated Pico 4020 silicon with a standard 40 pin GPIO header. (YAY!). This means I should be able to run any Linux distro IU want (or even Windows...), power it all from a DC source, and have GPIO all on a single board. A wifi/ble board, nvme drive and other peripherals can be added.

For robotics, I think this is a BIG deal.  The Intel CPU and 8Bg of RAM should be able to run a quantized Large Language Model for AI inference, reasonably well. GPIO should allow hardware integration easily.  A pair of these could work in tandem, one for LLM processing, another for vision/speech etc. Intercommunication between boards could be ethernet, serial, MQTT etc.  Sensors and automation could be SPI, I2C, serial, PWM etc... Or all of the above.  A cluster of 4, 8 etc?  That's a lot of compute and gpio for a robot.  Did I mention, they are super affordable?

Thoughts? Ideas? Excited as I am?

Scott

This topic was modified 5 months ago 2 times by WhitneyDesignLabs

Imagine by thought, create, don't wait, Scott.


   
Quote
Ron
 Ron
(@zander)
Father of a miniature Wookie
Joined: 4 years ago
Posts: 7591
 

Why not just drop back to a Pi4, there isn't that much difference between a 4 and a 5.

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401, and 360, fairly knowledge in PC plus numerous MPU's and MCU's
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
My personal scorecard is now 1 PC hardware fix (circa 1982), 1 open source fix (at age 82), and 2 zero day bugs in a major OS.


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

@zander Pi4 has not nearly enough compute to handle local LLM processing in reasonable speed.

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
Ron
 Ron
(@zander)
Father of a miniature Wookie
Joined: 4 years ago
Posts: 7591
 

@whitneydesignlabs I was thinking the 5 was only 20% faster, but I just googled it and it says 2x to 3x, so yes, much faster.

I thought we had sorted the Pi5 power situation, but maybe not. I just hope you don't run into very many issues with the new board when you can get one, I have never heard of it before so there is a good chance no one here has any experience. Maybe RADXA has a forum. Good luck.

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401, and 360, fairly knowledge in PC plus numerous MPU's and MCU's
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
My personal scorecard is now 1 PC hardware fix (circa 1982), 1 open source fix (at age 82), and 2 zero day bugs in a major OS.


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

@zander Thanks, Ron. Nope the problem with the Pi5 has not been sorted yet. I left off about 4 months ago with my best solution using a small automotive inverter and the Rpi5 factory (non standard USB-C) power supply. It was a work-around, but kludge. Admittedly I have not researched/shopped for new DC power options for the Pi5 in a few months, so there may be new options now. This Intel based CPU with 40 pin GPIO SBC has a lot of advantages for my use case. I am hopeful. Will report back, for sure.

Imagine by thought, create, don't wait, Scott.


   
Ron reacted
ReplyQuote
noweare
(@noweare)
Member
Joined: 4 years ago
Posts: 131
 

@whitneydesignlabs

I will have to get one of these at some point. But, I would not know where to start. As a robotic board do you use some kind of embedded  OS ?. I would think running a full blown os like ubuntu or windows would eat up resources for other things.

I used to be subscribed to a YT channel called "Explaining Computers" and each time a SBC was reviewed it was using a linux distro. My feeling was that SBCs are not used desktops so how are they really used and what is the development environment. The things I like to create are based on embedded hardware with no operating system at all. Someday I would like to check out SBC's to see what I'm missing. 

 


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

Posted by: @noweare

@whitneydesignlabs

As a robotic board do you use some kind of embedded  OS ?. I would think running a full blown os like ubuntu or windows would eat up resources for other things....

I used to be subscribed to a YT channel called "Explaining Computers" ... The things I like to create are based on embedded hardware with no operating system at all. ...

 

I don't plan to run a specialized Robotic OS like ROS. Someday, maybe, but that learning curve is beyond my availability.  I'll probably run Linux Mint. It should be able to run it just fine, including 4k video, which I won't need on this but could, if I wanted to.  Windows 10 should be fine, too. Might make the system a dual boot just for testing. I will put a 2Tb ssd in it, so storage should be o'plenty, even with many Gigs of LLM models.

I love "Explaining Computers"! He is a great teacher, almost as good as our host, Bill. 🙂 As far as I know, he has not reviewed a RADXA yet.  But other YouTubers have if you want to check it out. But I have not seen any videos running an LLM on it. I will test llama3-7B, and may document it on YouTube.

I share your love for embedded systems. Think of this RADXA board as a full computer, w/ mouse, keyboard, hdmi display, Gbit ethernet, wifi, bluetooth etc. It can run your favorite IDE, on your favorite OS, AND has a Raspberry Pi Pico embedded into it for all your control system GPIO geekabilities.  DC input and low-ish power consumption.

(in other words, it kind of IS an embedded system, with the dev computer, IDE, file storage etc all on the one pcb) 

 

This post was modified 5 months ago by WhitneyDesignLabs

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
(@davee)
Member
Joined: 3 years ago
Posts: 1851
 

Hi @whitneydesignlabs,

  I haven't even begun to look into LLMs, so maybe I have completely the wrong impression, but https://huggingface.co/blog/llama3 says

Llama 3 comes in two sizes: 8B for efficient deployment and development on consumer-size GPU, and 70B for large-scale AI native applications.

So isn't that something like a video GPU card with around 8GByte Vram, plus a respectable Intel I7 or thereabouts workstation to host it, not a Celeron in a tiny box, with 8Gb of RAM?

Also, I am not sure your SSD is going help that much with LLMs, as I have a feeling the large GB figures all need to be in very fast memory, directly in front of the processors, not hiding in an SSD.

I may be completely wrong, and of course it depends what you are trying to do, but I would strongly advise you double check that what you are proposing is feasible, before you spend too much cash.

If I am mistaken, then my apologies, but please share the evidence, because I, and maybe others, would be interested to know, for the future, even though I am not immediately planning to follow it up.

---------------

Apologies if I have misunderstood your intention, and you are intending to use some cloud resources to do the heavy work, whilst  you are trying to link up in a manner more akin to chatGPT or Copilot, etc. I thought local hosting was the intention of Llama, especially the smaller models, but I could well be confused. Again any clarity you can offer is welcomed.

Best wishes,

Dave

 


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

Posted by: @davee

Hi @whitneydesignlabs,

  I haven't even begun to look into LLMs, so maybe I have completely the wrong impression, but https://huggingface.co/blog/llama3 says

Llama 3 comes in two sizes: 8B for efficient deployment and development on consumer-size GPU, and 70B for large-scale AI native applications.

So isn't that something like a video GPU card with around 8GByte Vram, plus a respectable Intel I7 or thereabouts workstation to host it, not a Celeron in a tiny box, with 8Gb of RAM?

Also, I am not sure your SSD is going help that much with LLMs, as I have a feeling the large GB figures all need to be in very fast memory, directly in front of the processors, not hiding in an SSD.

I may be completely wrong, and of course it depends what you are trying to do, but I would strongly advise you double check that what you are proposing is feasible, before you spend too much cash.

If I am mistaken, then my apologies, but please share the evidence, because I, and maybe others, would be interested to know, for the future, even though I am not immediately planning to follow it up.

---------------

Apologies if I have misunderstood your intention, and you are intending to use some cloud resources to do the heavy work, whilst  you are trying to link up in a manner more akin to chatGPT or Copilot, etc. I thought local hosting was the intention of Llama, especially the smaller models, but I could well be confused. Again any clarity you can offer is welcomed.

Best wishes,

Dave

 

Hi Dave, llama3 8B refers to the 8 Billion parameters it was trained on. The actual file size when you download the entire model, is 4.7Gb.  To run the model, I use ollama.ai in a cli. This approach works on a Raspberry Pi 5 with 8Gb ram.  While LLMs will run faster on a system with a GPU, like nVidea, it can also run on a CPU. (amazingly).

The large ssd just makes it possible to store multiple models in the 4-7Gb file size. I might have a library of a dozen or so LLMs I am experimenting with, maybe more. So the storage is key there.  Only one model would be running at a time. And as mentioned above, an 8Billion parameter model, will run in CPU mode with 8Bg of ram. Smaller models run faster. Larger than 8B becomes unwieldy on the Pi's ARM processor.

Which is why I am excited to try all this on the Intel processor SBC. 

You are correct, though, it is my experience and intention to be run the LLMs 100% locally, no Internet needed, no APIs to OpenAI etc. 

I hope that helps explain and clarify my intentions and gives an overview on all this local LLM stuff. 🙂 I will post some results of LLM running on a Pi 5 8Gb as a follow-up.

 

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

I just ran llama3 on the Pi5/8Gb. It took 20 seconds for the inference to start, then streamed out the following text which which seemed to be complete in thought and task at about 6 minutes or so. But it kept rambling for another 4 minutes, so I stopped it with a ctrl-c.

*******
>>> Explain how the GPIO of a Raspberry Pi Pico work in simple terms.
assistant

The GPIO (General Purpose Input/Output) on a Raspberry Pi Pico is like a
special set of digital switches that you can control from your code.

Here's how it works:

**What are GPIOs?**

GPIOs are tiny electrical connections on the Pico that let you connect
external devices, like sensors, LEDs, motors, or even other computers.
Think of them as digital "fingers" that can touch or release a connection
to send signals between devices.

**How do I control GPIOs?**

You can use your code (written in languages like Python, C++, or
JavaScript) to tell the Pico what to do with each GPIO. Here are some
basic actions:

1. **Input**: Read the state of an external device connected to a GPIO.
For example, if you connect a button to a GPIO, you can read whether it's
pressed or not.
2. **Output**: Send a signal to an external device connected to a GPIO.
For instance, you could turn an LED on or off by sending a high or low
voltage signal.

**GPIO Modes**

GPIOs have different modes that control how they behave:

1. **Digital Input**: Read the state of an external digital device (e.g.,
a button).
2. **Digital Output**: Send a digital signal to an external device (e.g.,
turn an LED on or off).
3. **Analog Input**: Read the analog value from a sensor connected to a
GPIO.
4. **Analog Output**: Send an analog signal to an external device (rarely
used).

**Important Things to Know**

* Each GPIO has a specific function and can only do one thing at a time.
* Some GPIOs are more powerful than others, with features like interrupts
or pulse width modulation.
* Be careful when working with high-voltage devices or electrical signals!

In simple terms, the GPIO on your Raspberry Pi Pico is like a special set
of digital "fingers" that you can control from your code to interact with
external devices. With some basic understanding of how they work and what
modes are available, you'll be ready to start building cool projects with
your Pico!assistant

I'm glad I could help simplify the concept of GPIOs on a Raspberry Pi
Pico!

To recap:

* GPIOs (General Purpose Input/Output) are tiny electrical connections
that allow you to interact with external devices.
* You can control GPIOs using code, telling them what to do in terms of
input or output.
* There are different modes for GPIOs, including digital input/output,
analog input/output, and more.

Remember to be mindful of the limitations and capabilities of each GPIO
when working on projects. With a solid understanding of how they work,
you'll be ready to start building some amazing projects with your
Pico!assistant

You're welcome!

I hope this explanation helped clarify the concept of GPIOs on a Raspberry
Pi Pico for you. If you have any more questions or need further
clarification, feel free to ask!

Remember, understanding GPIOs is just the first step in unleashing the
full potential of your Pico and building amazing projects. Happy
coding!assistant

I'm....

*******
CTRL-C
At this point I cut it off because it seemed to be in an endless loop. Clearly there is work to be done. But it demonstrates what is possible. This is where I am hoping the new board can do it a bit faster and better. What I find interesting is that the machine knows about itself, in a way, and can even assist a human in programming, wiring and configuring.  And as noted above, this is all being processed 100% locally on the Pi, without cloud resources, no Internet required.
This post was modified 5 months ago by WhitneyDesignLabs

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
Ron
 Ron
(@zander)
Father of a miniature Wookie
Joined: 4 years ago
Posts: 7591
 

@whitneydesignlabs The trouble with using a cloud resource is the very slow communication speeds. I have WiFi6e which in my home gets as fast as 2.5 GBPS, but once it leaves the building, it is subjhect to whatever old gear is in the path it takes to get to another fast node. It could be as bad as 56KB.

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401, and 360, fairly knowledge in PC plus numerous MPU's and MCU's
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
My personal scorecard is now 1 PC hardware fix (circa 1982), 1 open source fix (at age 82), and 2 zero day bugs in a major OS.


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

@zander Yep, agreed. My robotic experiments are organized whereby everything has to work independent of Internet. If Internet is available, then added functionally is available. I wrote a script to ping Internet on interval, and my MQTT server sends a periodic message as to the availability of Internet. Then all my other scripts can behave appropriately based on Internet access.

For example, with text to speech, if Internet is available, I call an Eleven Labs API for voice processing. If not, voice processing is done locally. However, I will be changing that process soon, to Piper, which will uses neural AI for processing and then TTS will be 100% local, regardless of Internet.  That is just an example.  Same will be true for language processing. Primary LLM will be local, but when available, cloud based inference could be used if appropriate.

In a nutshell, if compute can't be done locally, at least at some reasonable quality level (and by quality, I mean mostly speed), then I am not going to do it. I don't want the systems to be Internet dependent for core robotic functions.

This post was modified 5 months ago by WhitneyDesignLabs

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

@davee Here is a better example addressing your request to share the evidence.  Hardware: Raspberry Pi 5, 8Gb, Raspberry Pi Pico connected to Pi5 via USB. Software: Pi OS, Ollama, Thonny IDE. LLM: llama3. Task: Use imbedded AI (no cloud resources) on Pi5 to help a human program the Pi5 for GPIO automation.  Results:

>>> You are an expert programmer in MicroPython. Write me the shortest, cleanest
... snippet possible to blink the onboard LED of a Pico.
Here is the shortest, cleanest snippet possible to blink the onboard LED
of a Raspberry Pi Pico using MicroPython:

```python
import machine
import utime

led = machine.Pin('LED', machine.Pin.OUT)
while True:
led.on()
utime.sleep(0.5)
led.off()
utime.sleep(0.5)
```

Let me explain what this code does:

1. We import the `machine` module, which provides access to the Pico's
hardware peripherals (in this case, the LED), and the `utime` module,
which provides a way to work with time (specifically, we use it to create
a delay).
2. We define the onboard LED as a `Pin` object using the `machine.Pin()`
function. The first argument is the name of the pin ('LED' in this case),
and the second argument is the mode of the pin (in this case, `OUT`, which
means it's an output).
3. We enter an infinite loop (`while True:`) to keep blinking the LED.
4. Inside the loop, we turn on the LED using its `on()` method.
5. Then, we create a delay of 0.5 seconds using the `utime.sleep(0.5)`
function.
6. Next, we turn off the LED using its `off()` method.
7. Finally, we create another delay of 0.5 seconds to complete one blink
cycle.

Run this code on your Pico, and you should see the onboard LED blinking at
a rate of once per second!

>>>
Use Ctrl + d or /bye to exit.
>>> Send a message (/? for help)

This took a few minutes to infer, but I did not run stopwatch for exact elapsed time.  I then copy and pasted the code into Thonny and hit run. The onboard LED blinks.

This setup simulates the RADXA by having a Pico attached to an SBC. It demonstrates that the machine can assist in coding itself. (scary, maybe yes, but we will leave the philosophy aside for a moment...)

My use case is robotic hardware with natural language interface, that can me help build the robot itself by assisting with coding, planning, summarizing work flow, drafting change logs etc...

When the RADXA arrives from the PRC, I will repeat the above, but hopefully with faster inference response time due to the Intel CPU vs. ARM CPU. Summary of goal: DC powered, mobile robotic platform, no cloud required, LLM interface, TTS & STT.

Scott

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
(@davee)
Member
Joined: 3 years ago
Posts: 1851
 

Hi @whitneydesignlabs,

   Thanks for your replies ... an interesting start, I agree, and more than I was expecting. I hope you will continue to report your progress. At the moment, I am trying to gauge what is realistically possible for very little cash, and as you also indicate, without being bound to cloud based computing resources. I am intrigued as to what capability, in terms of results, you are realistically hoping to achieve.

Thanks once again, and best wishes,

 Dave


   
ReplyQuote
WhitneyDesignLabs
(@whitneydesignlabs)
Member
Joined: 4 years ago
Posts: 115
Topic starter  

@Davee, I'd say this setup is around $250-$300.

Raspberry Pi version:

$80 Pi5 8Gb
$20 Case/cooling
$120 nVME ssd, 2Tb (optional)
$4 Pico
$20 shipping/misc.
$244 Total

RADXA version:

$72 RADXA SBC rp0, 8Gb
$10 Cooling
$10 wifi/ble (optional)
$120 nVME ssd, 2Tb (optional)
$35 shipping/misc.
$247 Total

I have not included power supply or battery in this summary, presumably this would be handled on another part of the robotic system. But maybe throw $100+ for a decent size power bank to include power.

As you can see, the Radxa gets price competitive to the Pi, especially if it can run more reliably on a DC power source, and if its CPU is more bang for the buck. Just my quick analysis. But I think anyone should be able add a local LLM to their projects for two and a half Benjamin Franklins, or less if you omit the ssd and run on sd card.

I will post test results when I have hardware in hand.  Maybe I'll do a side by side comparison video short. That would be fun.

Thanks for your interest in this project.

This post was modified 5 months ago 2 times by WhitneyDesignLabs

Imagine by thought, create, don't wait, Scott.


   
ReplyQuote
Page 1 / 2