
LLMOps

A collection of resources on LLMOps.


📜 Reference

  • ScatterLab - Architecture Design for A/B Testing: https://tech.scatterlab.co.kr/serving-architecture-1/
  • ScatterLab - ArgoCD and Model Serving: https://tech.scatterlab.co.kr/serving-architecture-2/
  • Selecting a Serving Framework for HyperCLOVA: https://engineering.clova.ai/posts/2022/01/hyperclova-part-1
  • NSML - Scheduling Requirements for a Distributed Training Platform: https://engineering.clova.ai/posts/2022/08/nsml-scheduler-part-1
  • vLLM OpenSource: https://blog.vllm.ai/2023/06/20/vllm.html
  • Ray OpenSource: https://www.ray.io/
  • Running Kubernetes AIOps with K8sGPT on a Local LLM: https://yozm.wishket.com/magazine/detail/2515/
  • WandB Guides: https://docs.wandb.ai/ko/guides
  • Dify OpenSource: https://github.com/langgenius/dify
  • LangFlow OpenSource: https://github.com/langflow-ai/langflow
  • LangGraph Quick Start: https://langchain-ai.github.io/langgraph/tutorials/introduction/
  • Uncovering the Secret Behind vLLM's Up-to-24x Speedup: https://tech.scatterlab.co.kr/vllm-implementation-details/
  • LangChain Tutorial (Korean): https://wikidocs.net/book/14314
  • LangChain OpenTutorial (Global): https://github.com/LangChain-OpenTutorial
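The ScatterLab A/B-testing articles above discuss routing live traffic between model variants. As a minimal illustrative sketch (the function name `assign_bucket` and the experiment key are my own assumptions, not taken from the articles), deterministic hash-based bucketing keeps each user's variant assignment stable across requests without any shared state:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically map a user to one variant of an experiment.

    Hashing the experiment name together with the user ID keeps a
    user's bucket stable per experiment, while assignments stay
    independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    index = int(digest, 16) % len(variants)
    return variants[index]

# The same user always lands in the same bucket for a given experiment.
variants = ["model-a", "model-b"]
assert assign_bucket("user-42", "serving-ab", variants) == \
       assign_bucket("user-42", "serving-ab", variants)
```

Because the mapping is a pure function of the request, any number of stateless serving replicas will agree on the assignment, which is what makes this pattern convenient for model-serving A/B tests.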