LLM and image generation
Since Sapphire calls the device EdgeAI and AMD has given the processor the name Ryzen AI 9 HX 370, we have to at least put its AI capabilities to the test. AMD itself likes to point to the Amuse software, for example, which I tried out here in version 3.1.

Under the hood, Amuse runs a version of Stable Diffusion with its default settings.

I tried out various prompts with different levels of detail and got fairly quick results with the Balanced preset, but their quality didn't blow me away.
With the Quality preset and all optional features enabled, the results look much better, but generation then takes minutes rather than just a few seconds per image.

Here is the process in real time:
In my opinion, however, video generation is the far more exciting possibility, because image generation is now available everywhere, even online.
I'll spare you the real-time recording here, though: the EdgeAI needs about 11.5 minutes for the result above.
Moving on from the rather mediocre images and videos: with LM Studio (v0.3.24) you can also easily run an LLM locally. The setup has been streamlined considerably, and after just a few clicks you can get started.

Unfortunately, with open-source models you quickly run into tasks that the commercial offerings handle somewhat better.
Other things, on the other hand, work very well, and I also find the output speed acceptable, considering that everything runs completely privately and locally on my device rather than being computed somewhere in the cloud.
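Running locally also means the model can be scripted against: LM Studio can expose an OpenAI-compatible server on localhost. Here is a minimal sketch of building such a chat request; the default port 1234 and the placeholder model name are assumptions (check the developer/server settings in LM Studio for your actual values):

```python
import json

# Assumed default address of LM Studio's local OpenAI-compatible server.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model",
                       temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat completion payload for the local server.

    'local-model' is a placeholder; use the identifier of the model
    you have loaded in LM Studio.
    """
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

payload = build_chat_request("Summarize the Ryzen AI 9 HX 370 in one sentence.")
body = json.dumps(payload)

# To actually send it (requires LM Studio's server to be running):
#   import requests
#   reply = requests.post(LMSTUDIO_URL, json=payload, timeout=120).json()
#   print(reply["choices"][0]["message"]["content"])
```

The request stays entirely on the machine, which is exactly the privacy advantage over cloud APIs mentioned above.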
