Long pages

Showing below up to 50 results in range #51 to #100.

  1. (hist) Utilizzare le API di OpenAI [3,350 bytes]
  2. (hist) Adversarial Filtering (AF) [3,346 bytes]
  3. (hist) SPLADE [3,338 bytes]
  4. (hist) Attention Is All You Need (2017) [3,271 bytes]
  5. (hist) CAMEL (Agent Framework) [3,111 bytes]
  6. (hist) Proximal Policy Optimization (PPO) [3,057 bytes]
  7. (hist) Neural Information Retrieval [3,041 bytes]
  8. (hist) Llama [3,023 bytes]
  9. (hist) Rete Generativa Avversaria [2,969 bytes]
  10. (hist) Temperatura (Apprendimento Automatico) [2,940 bytes]
  11. (hist) Test-Time Compute Scaling [2,857 bytes]
  12. (hist) Is Power-Seeking AI an Existential Risk? [2,833 bytes]
  13. (hist) Dropout (Reti Neurali) [2,818 bytes]
  14. (hist) Function Calling [2,805 bytes]
  15. (hist) Magenta [2,791 bytes]
  16. (hist) Transformer (Architettura di Deep Learning) [2,777 bytes]
  17. (hist) FANNG: Fast Approximate Nearest Neighbour Graphs [2,771 bytes]
  18. (hist) BABILong [2,724 bytes]
  19. (hist) A Theory for Emergence of Complex Skills in Language Models (2023) [2,715 bytes]
  20. (hist) BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension [2,712 bytes]
  21. (hist) Dialogue State Tracking [2,691 bytes]
  22. (hist) Convinzione (Belief) [2,659 bytes]
  23. (hist) General Language Understanding Evaluation (GLUE) [2,640 bytes]
  24. (hist) Adversarial Endings [2,632 bytes]
  25. (hist) Mistral [2,526 bytes]
  26. (hist) LoRA [2,513 bytes]
  27. (hist) Apprendimento Auto-Supervisionato [2,495 bytes]
  28. (hist) Neural Machine Translation by Jointly Learning to Align and Translate [2,495 bytes]
  29. (hist) Reti Neurali Convoluzionali (CNN) [2,445 bytes]
  30. (hist) Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models [2,411 bytes]
  31. (hist) Hypernetworks [2,326 bytes]
  32. (hist) Natural language inference (NLI) [2,313 bytes]
  33. (hist) Alpaca [2,311 bytes]
  34. (hist) What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization? [2,291 bytes]
  35. (hist) Fine-tuning [2,286 bytes]
  36. (hist) Chain of Thought Prompting Elicits Reasoning in Large Language Models [2,255 bytes]
  37. (hist) Training language models to follow instructions with human feedback [2,255 bytes]
  38. (hist) Softmax [2,247 bytes]
  39. (hist) Language Models are Few-Shot Learners [2,180 bytes]
  40. (hist) Training Compute-Optimal Large Language Models [2,172 bytes]
  41. (hist) Povertà dello stimolo (Linguistica) [2,125 bytes]
  42. (hist) Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation [2,073 bytes]
  43. (hist) Metodi di Decoding [2,057 bytes]
  44. (hist) Tensore (Informatica) [2,012 bytes]
  45. (hist) Are Large Language Models Geospatially Knowledgeable? [2,003 bytes]
  46. (hist) Going Deeper with Convolutions [1,980 bytes]
  47. (hist) Scaling Rectified Flow Transformers for High-Resolution Image Synthesis [1,976 bytes]
  48. (hist) None [1,954 bytes]
  49. (hist) Quantizzazione [1,949 bytes]
  50. (hist) Technical Report, Palm 2 [1,916 bytes]