Xavier Martinet
Meta
Verified email at meta.com
Title · Cited by · Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 12228 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 11825 · 2023
The llama 3 herd of models
A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ...
arXiv preprint arXiv:2407.21783, 2024
Cited by 1985 · 2024
LLaMA: open and efficient foundation language models. arXiv
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 268 · 2023
Llama: Open and efficient foundation language models. arXiv 2023
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971 10, 2023
Cited by 212* · 2023
Llama 2: open foundation and fine-tuned chat models. arXiv
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 144 · 2023
Llama 2: Open foundation and fine-tuned chat models. arXiv 2023
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288
Cited by 136
Llama 2: Open foundation and fine-tuned chat models, 2023b
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288, 2023
Cited by 131 · 2023
Hypertree proof search for neural theorem proving
G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ...
Advances in neural information processing systems 35, 26337-26349, 2022
Cited by 125 · 2022
LLaMA: open and efficient foundation language models, 2023 [J]
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
URL https://arxiv.org/abs/2302.13971, 2023
Cited by 114 · 2023
Llama 2: open foundation and fine-tuned chat models. CoRR abs/2307.09288 (2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288 10, 2023
Cited by 61 · 2023
Polygames: Improved zero learning
T Cazenave, YC Chen, GW Chen, SY Chen, XD Chiu, J Dehos, M Elsa, ...
ICGA Journal 42 (4), 244-256, 2021
Cited by 60 · 2021
The llama 3 herd of models
A Grattafiori, A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, ...
arXiv e-prints, arXiv:2407.21783, 2024
Cited by 37 · 2024
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, B Rozière, N Goyal, E Hambro, F Azhar, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 19 · 2023
Llama: Open and efficient foundation language models, CoRR abs/2302.13971 (2023). URL: https://doi.org/10.48550/arXiv.2302.13971. doi: 10.48550/arXiv.2302.13971
H Touvron, T Lavril, G Izacard, X Martinet, M Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971
Cited by 16
Llama 2: Open foundation and fine-tuned chat models. arXiv [Preprint] (2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288
Cited by 12
Worldsense: A synthetic benchmark for grounded reasoning in large language models
Y Benchekroun, M Dervishi, M Ibrahim, JB Gaya, X Martinet, G Mialon, ...
arXiv preprint arXiv:2311.15930, 2023
Cited by 7 · 2023
Articles 1–17