ClipPhrase

models can

825 clips found · 3 videos

Examples in videos

YouTube · Dazed · 288K views · 2025-09-10
Alex Consani on how to walk, viral TikToks, Leo power and more | The dA-Zed Guide to Being
>> I'd like to accomplish a goal where “models can be like whatever gender they” want.
Play from 5:12
YouTube · The Diary Of A CEO · 679K views · 2026-02-23
Uber CEO: I Have To Be Honest, AI Will Replace 9.4 Million Jobs At Uber!
but we're learning real time as human “beings and when the models can learn” real time
Play from 83:30
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
let the optimization go on to much bigger scales, which kind of “kickstarted a major change in what the models can do and how people use them.” - What kind of domains is RLVR amenable to?
Play from 100:20
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
the attention is, where they're trying to push this set of methods into “these kind of more open-ended domains so the models can learn a lot more.” - I think that's called reinforcement learning with AI feedback, right?
Play from 101:07
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
singularity idea, whereas I think research is messy, “social and largely in the data in ways that AI models can't” process. But what we do have today is really powerful and these
Play from 184:30
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
models, you need more compute, because we talked about having more parameters and “we talked about knowledge. Essentially, there's a ratio where big models can absorb” more from data, and then you get more benefit out of this.
Play from 70:14
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
attention mechanism. You get a solid understanding of how things work, but “at some point you reach a limit, because small models can only do so much.” The problem with learning about LLMs at scale is that it's
Play from 119:06
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
ASI, Artificial Superintelligence, and what are the language models that we “have today capable of doing? I think the language models can solve a lot” of tasks, but a key milestone among the AI community is
Play from 159:13
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
is there a threshold for AGI? I think the real cool thing here is that “we have foundation models we can specialize. That's like the” breakthrough. Right now, I think we are not there yet because, first,
Play from 195:29
YouTube · Lex Fridman · 758K views · 2026-01-31
State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
Pre-training dataset size is measured in trillions of tokens. “Smaller models from researchers can be something like five to 10 trillion.” Um,
Play from 69:08
"models can" — примеры в реальных видео | ClipPhrase