#transformer-architecture

#large-language-models
HackerNoon
1 month ago
Data science

Textbooks Are All You Need: Abstract and Introduction

phi-1 is a compact 1.3B-parameter language model for code that achieves notable accuracy despite its small size.
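
As a hedged aside: phi-1 is published on the Hugging Face Hub as microsoft/phi-1, so a minimal way to try its code completion is the standard transformers API. The prompt and decoding settings below are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: code completion with phi-1 via Hugging Face transformers.
# "microsoft/phi-1" is the published checkpoint; the prompt and generation
# settings here are illustrative, not from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # greedy decoding
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
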
Medium
4 months ago
Data science

Deploying Large Language Models (LLMs) on Google Cloud Platform

Large language models (LLMs), like ChatGPT, are rapidly gaining popularity due to their conversational abilities and natural language understanding.
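
One common route for this kind of deployment (a sketch under assumptions, not necessarily the article's method) is a Vertex AI endpoint: upload a model packaged in a serving container, then deploy it behind a managed endpoint. The project ID, container URI, and machine shape below are placeholders.

```python
# Hedged sketch: serving an LLM on GCP behind a Vertex AI endpoint.
# Project ID, container image URI, and machine/accelerator choices are
# placeholders; the article's exact deployment path may differ.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

model = aiplatform.Model.upload(
    display_name="my-llm",
    # Custom serving container that wraps the model (placeholder URI).
    serving_container_image_uri="us-docker.pkg.dev/my-gcp-project/serving/llm:latest",
)
endpoint = model.deploy(
    machine_type="g2-standard-8",   # GPU host
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
print(endpoint.resource_name)  # the endpoint is now ready for predict() calls
```
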
#ai
faun.pub
2 months ago
Artificial intelligence

Leveraging AI for Kubernetes Troubleshooting via K8sGPT

AI, specifically K8sGPT, can be used to manage Kubernetes efficiently.
Medium
2 months ago
Artificial intelligence

Leveraging AI for Kubernetes Troubleshooting via K8sGPT

AI can help manage Kubernetes through K8sGPT, which is based on the Generative Pre-trained Transformer (GPT) model.
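
For a concrete flavor (a sketch, since the articles themselves aren't reproduced here): K8sGPT is a CLI, and `k8sgpt analyze --explain` scans a cluster and asks the configured AI backend to explain each finding. Wrapping it from Python is purely illustrative, and the JSON field names are assumptions.

```python
# Hedged sketch: driving the k8sgpt CLI from Python. The "results"
# structure and field names are assumptions about its JSON output.
import json
import subprocess

proc = subprocess.run(
    ["k8sgpt", "analyze", "--explain", "--output", "json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(proc.stdout)
for finding in report.get("results", []):
    # Each finding names a failing resource and an AI-generated explanation.
    print(finding.get("name"), "->", finding.get("details"))
```
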
InfoQ
3 months ago
Data science

Meta Open-Sources MEGALODON LLM for Efficient Long Sequence Modeling

MEGALODON, a large language model (LLM), outperforms the Llama 2 model on various benchmarks while offering linear computational complexity and unlimited context length.