Developer fighting 502s from Lemmy's servers.
“Fill in the blank” is now available, just got done coding it.
If you want to try it out, I created a new course “Testing out new question types”.
Thanks, I’ll check it out.
Yeah agree, I’ll definitely implement that one.
Right now I’m working on “match the cards”.
Edit: For audio I’m not sure how I would do it. I don’t think most people would record it themselves when creating a course, so I would need to generate it. Then you’d have the issue of correct pronunciation…
I made an Issue for Feature requests. I’ve put OIDC in there: Feature Requests & Suggestions
Hi, you created the Korean course right? Thanks for contributing!
If you have any feature requests or suggestions please put them here: Feature Requests
There’s also a collection specific for question types: Question Types Collection
Yeah, I’ll probably go with LangChain and some user options for different LLMs.
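A rough sketch of what those user options could look like, assuming a simple provider registry with a fallback (all names here are hypothetical; a real version would delegate to LangChain chat-model integrations instead of stubs):

```typescript
// Hypothetical sketch: let users pick which LLM backend answers a prompt.
// EchoProvider is a stand-in for real backends (Gemini, Mistral, Ollama, ...).

interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

class EchoProvider implements LLMProvider {
  constructor(public name: string) {}
  async complete(prompt: string): Promise<string> {
    // A real provider would call the model here.
    return `[${this.name}] ${prompt}`;
  }
}

class ProviderRegistry {
  private providers = new Map<string, LLMProvider>();

  register(p: LLMProvider): void {
    this.providers.set(p.name, p);
  }

  // Fall back to a default model if the user's choice isn't available.
  pick(userChoice: string, fallback: string): LLMProvider {
    return this.providers.get(userChoice) ?? this.providers.get(fallback)!;
  }
}

const registry = new ProviderRegistry();
registry.register(new EchoProvider("gemini"));
registry.register(new EchoProvider("ollama"));

const provider = registry.pick("ollama", "gemini");
```

The registry keeps model choice a per-user setting rather than a hardcoded dependency, which is what makes swapping Gemini for a self-hosted backend cheap later.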
I’ll also look into Authelia.
Well, yes, in a way at least. I’m not pretending to invent something never done before. Although it already has multiple features that Anki doesn’t have.
Thanks! I’m already eyeing Ollama for this.
Haha. Well we can’t all actually be Duolingo and employ people to create the courses :D
Please do :). I take any help I can get.
That’s weird. Did a quick google and it does seem to be 25 USD. Last time I made one it was 25 for sure as well - but that one got banned due to inactivity D:
Fair opinion. Native apps do have some huge advantages, but also some disadvantages.
I’ve coded both before (although way more PWAs), and with native you also run into platform issues as long as you don’t ship exclusively for one platform.
PWAs have a huge advantage here since they run the same everywhere, as long as the platform has a browser that isn’t Safari.
Is it for self-host ppl too?
In theory that’s not an issue. I use Supabase, which you can self-host as well.
You can also self-host the Mistral client, but not Gemini. However, I am planning to move away from Gemini towards a more open solution that would also support self-hosting, or in-browser AI.
I am looking for OIDC, S3 and PgSQL
Since I use Supabase, it runs on PgSQL and Supabase Storage, which is just an adapter for AWS S3 - or any S3, really. For auth, I use Supabase Auth, which uses OAuth 2.0 - that’s the same as OIDC, right?
Thanks. My general strategy for GenAI and reducing hallucinations is to not give it the task of making stuff up, but to only work on existing text - that’s why I’m not allowing users to create content without source material.
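A minimal sketch of that grounding idea, assuming the model only ever sees a prompt built from user-provided source material (function and field names are made up for illustration):

```typescript
// Hypothetical sketch: the generator refuses to run without source material
// and the prompt constrains the model to facts stated in that source.

interface SourceMaterial {
  title: string;
  text: string;
}

function buildGroundedPrompt(source: SourceMaterial, questionCount: number): string {
  if (source.text.trim().length === 0) {
    // Refuse to generate instead of letting the model invent content.
    throw new Error("No source material provided");
  }
  return [
    `Create ${questionCount} quiz questions.`,
    "Use ONLY facts stated in the source below.",
    "If the source does not contain enough material, create fewer questions.",
    `--- SOURCE: ${source.title} ---`,
    source.text,
    "--- END SOURCE ---",
  ].join("\n");
}

const prompt = buildGroundedPrompt(
  { title: "Hangul basics", text: "The consonant ㄱ is romanized as g or k." },
  3
);
```

This doesn’t eliminate hallucinations (LLMs will be LLMs), but it narrows the task from open-ended generation to restating existing text, which is where a reporting system can catch the rest.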
However, LLMs will be LLMs; I’ve been testing it a lot and have already found multiple hallucinations. I built in a reporting system, although only submitting reports works right now, not viewing reported questions.
That’s my short-term plan to get good content quality, at least. I also want to move away from Vercel AI & Gemini to a LangChain agent system, or maybe a graph, which should increase output quality.
Maybe in some parallel universe this really takes off and many people work on high-quality courses together…
Yeah, good idea. It’s possible to do that with WebLLM & LangChain. Once LangChain is integrated, it’s pretty similar to the Python version, so it should be doable, I think.
Would be nice for sure… 0 forks yet… but I’m hopeful :D
Yeah you’re right. I switched it to AGPL.
Thank you! Let me know if you find out more about the issue. I’ll also keep an eye out for the cause.
Edit: I’ve opened an Issue for this on GitHub: https://github.com/cr4yfish/nouv/issues/2
Thanks, haha. I’d love to develop a native app for it too, but this is a zero-budget project (aside from the domain). The Play Store has a one-time fee, so that’s 25€ for Android, plus 8€/month for the iOS App Store just to have the app on there.
In theory, I could just offer a downloadable .apk for Android to circumvent the fee, but most people don’t want to install a random .apk from the internet. And I’m not developing a native app for like 3 people, excluding myself (I’m an iPhone user).
Soo, yeah that’ll probably not happen :(.
Never heard of that, actually. The main difference would be my web-based approach with a cloud, which works on any device. You also don’t have to run the LLM on the same device (which seems to be a must with SillyTavern?).
The Character Cards database looks very interesting though! I will definitely look into writing an importer/connecting service.