Exploring LoRA – Part 1: The Idea Behind Parameter Efficient Fine-Tuning (medium.com/inspiredbrilliance)
103 points by aquastorm 14 hours ago | 8 comments

Author here. Happy to see this posted here. This is actually a series of blog posts:

1. Exploring LoRA — Part 1: The Idea Behind Parameter Efficient Fine-Tuning and LoRA: https://medium.com/inspiredbrilliance/exploring-lora-part-1-...

2. Exploring LoRA - Part 2: Analyzing LoRA through its Implementation on an MLP: https://medium.com/inspiredbrilliance/exploring-lora-part-2-...

3. Intrinsic Dimension Part 1: How Learning in Large Models Is Driven by a Few Parameters and Its Impact on Fine-Tuning: https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...

4. Intrinsic Dimension Part 2: Measuring the True Complexity of a Model via Random Subspace Training: https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...

Hope you enjoy reading the other posts too. Merry Christmas and Happy Holidays!
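
For readers who want the gist before clicking through: LoRA freezes the pretrained weight matrix W and learns a low-rank update BA on top of it, so the effective weight becomes W + (alpha/r) * BA. Below is a minimal PyTorch sketch of that idea (illustrative only; the LoRALinear name and the hyperparameter defaults are mine, not code from the posts):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained linear layer plus a trainable low-rank update,
        so the effective weight is W + (alpha/r) * B @ A."""

        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():  # freeze the pretrained weights
                p.requires_grad = False
            # B starts at zero, so at step 0 the model is exactly the
            # pretrained one; only A and B receive gradients afterwards.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    # Wrapping a 768x768 layer with r=8 trains 2*8*768 = 12,288 parameters
    # instead of 768*768 = 589,824 -- roughly 2% of the original count.
    layer = LoRALinear(nn.Linear(768, 768), r=8)

After fine-tuning, the update B @ A can be merged back into W, so there is no extra cost at inference time. Part 2 of the series walks through a fuller implementation on an MLP.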


Thanks for sharing. This got me thinking: why is Medium used so much for technical articles like this? Especially since so many articles have ended up behind a paywall for me recently.

(Not to be confused with LoRa, short for "long range", a spread spectrum modulation technique derived from chirp spread spectrum (CSS) technology that powers technologies like LoRaWAN and Meshtastic.)

This gets me every time. I expect to see something interesting and it turns out to be the other one. One is a fantastic thing and the other is mediocre; pick which way round at your discretion!

Really wish they had come up with another name. Googling gets annoying.

Contributing factors: they both use mixed capitalization, and they have partially overlapping audiences.

Super cool series of articles! :)



