I need to learn more about BIND DNS server to pass my job interview next Monday

Michael De Roover isc at nixmagic.com
Mon Sep 15 22:25:46 UTC 2025


On Sunday, 14 September 2025 10:13:30 Central European Summer Time Marc wrote:
> Why don't you chat a bit with AI. My impression is that AI is good at
> teaching you what you want to know. Quite often it messes up, but for
> broader knowledge acquirement it should do fine.
 
> https://copilot.microsoft.com/

I think this is a good idea, but only for the broad strokes. A recent 
conversation with Copilot comes to mind.

I had a practical question for Copilot when I was introduced to this newfangled 
GPT-5 thing in MS Edge's update changelog. So I opened the chat, and thank 
heavens that I no longer need to be signed into a Microsoft account to use it. 
I saw three modes: Fast (2-3s), Think Deeper (~10s) and Smart (GPT-5). So I 
asked what determines the complexity of the model for that Smart mode. Is it 
like my closed beta with GPT-3, where it was just a model change (Ada, Babbage, 
Curie, Davinci), with Davinci being the most advanced but also the slowest and 
most token-intensive?

The answer it gave was that it all uses the same model, that GPT-5 is not a 
collection of models the way the GPT-3 series was (though I don't think many 
people are aware of that for GPT-3 either), and that it has internal weights 
that determine the complexity, which can be set for each conversational 
"turn". It also immediately brought up `reasoning_effort`, an API parameter. 
So I am tempted to think of "Smart" as just: send my turn to the API at the 
lowest complexity, ask which value reasoning_effort should be set to, then 
have the front-end resend the prompt with that level of reasoning_effort. 
Logical, simple, arguably not worthy of much hype but marketable, and 
reasonable to consider during a recession with so many engineers laid off. But 
the model would not budge: the release is more advanced than that, and surely 
it's both of them -- internal and external weights. I said thank you and moved 
on.
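For what it's worth, `reasoning_effort` does exist as a parameter in 
OpenAI-style chat APIs. A minimal sketch of the two-stage dispatch I 
suspected -- the endpoint, model name, and routing flow here are my own 
assumptions for illustration, not anything Copilot confirmed:

```python
import json

# Hypothetical front-end router: first send a cheap, low-effort turn
# that only classifies the prompt, then resend the real prompt with
# the returned effort level. Endpoint and model name are assumptions.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, effort: str) -> dict:
    """Build an OpenAI-style chat request; reasoning_effort accepts
    "low", "medium", or "high" on reasoning-capable models."""
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"invalid reasoning_effort: {effort}")
    return {
        "model": "gpt-5",          # assumed model identifier
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# Stage 1: a low-effort probe that only picks an effort level.
probe = build_request(
    "Reply with one word -- low, medium, or high -- for how much "
    "reasoning this needs: 'What determines Smart mode complexity?'",
    "low",
)

# Stage 2: resend the real prompt with whatever stage 1 returned
# (hard-coded here, since we are not actually calling the API).
final = build_request("What determines Smart mode complexity?", "high")
print(json.dumps(final, indent=2))
```

Whether Copilot actually works this way is exactly the kind of claim one 
cannot verify from the chat window alone.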

But what if I didn't know where to probe further, what to believe and what to 
dismiss as e.g. hype? What if I couldn't ask comparative questions that force 
the model to decide between its own conflicting "beliefs"? What if I was still 
completely in the dark, and had to take the generated responses as gospel?

AI is helping me become more productive, and to learn about new subjects 
faster. Another example is how I learned Arduino with it. But that took 
existing knowledge of Bash and Python, as well as an understanding of variable 
types and a bit of memory management (no heap, and memory is counted in kB at 
best). And when programming ATtiny85 chips, it made me waste a lot of time on 
burning their bootloader. You don't need to invoke avrdude by hand for that; 
the Arduino IDE has a burn-bootloader option with frequency presets. The 2MHz 
preset that I let the AI settle on while travelling abroad turned out to be 
invalid (only 1, 8, or 16MHz are valid, plus 20MHz with an external crystal). 
But I had no way to confirm that, so I spent a considerable amount of time 
figuring out... a dead end.
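That invalid-preset trap could have been caught up front with a simple sanity 
check. A minimal sketch in Python -- the low-fuse bytes below are commonly 
cited values that I am adding myself, so verify them against the ATtiny85 
datasheet's clock-selection tables before burning anything:

```python
# Valid ATtiny85 clock presets, keyed on (MHz, external crystal?).
# Fuse bytes are commonly cited low-fuse values -- double-check them
# against the datasheet before writing fuses to real hardware.
VALID_PRESETS = {
    (1, False): 0x62,   # 8 MHz internal oscillator / 8 (factory default)
    (8, False): 0xE2,   # 8 MHz internal oscillator, CKDIV8 disabled
    (16, False): 0xE1,  # internal PLL, as used on Digispark boards
    (20, True): 0xFF,   # external crystal at 8 MHz or above
}

def low_fuse_for(mhz: int, external_crystal: bool = False) -> int:
    """Return the low-fuse byte for a requested clock, or raise."""
    try:
        return VALID_PRESETS[(mhz, external_crystal)]
    except KeyError:
        raise ValueError(
            f"{mhz} MHz (external={external_crystal}) is not a valid "
            "ATtiny85 clock preset"
        ) from None

# The 2 MHz value the AI settled on would have failed immediately:
# low_fuse_for(2) raises ValueError.
print(hex(low_fuse_for(8)))  # prints 0xe2
```

A lookup table like this is exactly the kind of ground truth the AI had no 
access to -- and neither did I, offline on the road.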

So if you want to use AI to help you be more productive, do make sure to 
cross-verify your findings, and to hold the AI accountable for them. If 
necessary, go back to an earlier "turn" to undo all references to the 
incorrect statements. It won't listen to your disagreement if it has just 
seen that incorrect statement five times before in its backlog.

-- 
[Met vriendelijke groet] [Best regards]
[Michael De Roover]
---      ---      ---      ---      
[Mail] [*@nixmagic.com] [michael at at@de.roover.eu.org]
[Web] [https://michael.de.roover.eu.org]
[Forge] [https://git.nixmagic.com]
[Weather] [Antwerpen] [23:00] [13.9°C]
---      ---      ---      ---      
[0] [2025-09-15 23:56 CEST]
[~] [vim at workstation.vm.ideapad.internal]
[$] [/usr/bin/sign-mail] [>_] 
---      ---      ---      ---      

More information about the bind-users mailing list