Linux Foundation Begins Major Focus on Real Time Linux
-
@coliver said:
@scottalanmiller said:
@DustinB3403 said:
Slower overall?
or faster overall?
It would seem to be that they want the system to be faster.
No, they want it to be slower. Much slower so that they can focus on latency.
In all cases, latency (response time) comes at a cost to throughput (speed, as most people define it). This isn't Linux specific but just a general rule. Same goes for traffic, computer hardware, post office delivery, whatever.
So RTL will be processing fewer commands faster than traditional Linux?
Not faster, sooner. It's different.
-
RTL is all about lowering the response time of the system. It responds more quickly by having a faster clock tick, for example, but that comes at a cost of doing less on each tick and spending more CPU time ticking and less optimizing.
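To make the tick-rate trade-off concrete, here is a toy simulation (my own illustration, not real kernel code, and the numbers are made up): a job arriving at a random moment waits for the next scheduler tick before it can start, so shorter tick periods cut start latency, but every tick burns fixed overhead that is no longer available for real work.

```python
# Toy model of a periodic scheduler tick. Assumed, illustrative numbers:
# not measurements of any actual kernel.

def simulate(tick_period_ms, tick_overhead_ms, duration_ms=1000):
    """Return (avg start latency in ms, fraction of CPU left for real work)."""
    # A job arriving uniformly at random waits, on average, half a tick.
    avg_start_latency = tick_period_ms / 2
    ticks = duration_ms / tick_period_ms
    useful_fraction = 1 - (ticks * tick_overhead_ms) / duration_ms
    return avg_start_latency, useful_fraction

# "Throughput" kernel: long ticks, little total overhead, slow to react.
lat_std, work_std = simulate(tick_period_ms=10, tick_overhead_ms=0.01)
# "Real-time" kernel: short ticks, more total overhead, reacts quickly.
lat_rt, work_rt = simulate(tick_period_ms=1, tick_overhead_ms=0.01)

print(f"standard: {lat_std:.2f} ms avg start latency, {work_std:.1%} CPU for work")
print(f"realtime: {lat_rt:.2f} ms avg start latency, {work_rt:.1%} CPU for work")
```

Ticking ten times as often here cuts the average start latency by 10x while shaving roughly a percent off the CPU time available for actual work, which is the "less on each tick, more time ticking" cost described above.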
-
@scottalanmiller said:
RTL is all about lowering the response time of the system. It responds more quickly by having a faster clock tick, for example, but that comes at a cost of doing less on each tick and spending more CPU time ticking and less optimizing.
Interesting.
-
Blargh, I took 2 classes on how this works!! I wish my memory didn't suck so I could explain it better.
-
@scottalanmiller said:
@coliver said:
@scottalanmiller said:
@DustinB3403 said:
Slower overall?
or faster overall?
It would seem to be that they want the system to be faster.
No, they want it to be slower. Much slower so that they can focus on latency.
In all cases, latency (response time) comes at a cost to throughput (speed, as most people define it). This isn't Linux specific but just a general rule. Same goes for traffic, computer hardware, post office delivery, whatever.
So RTL will be processing fewer commands faster than traditional Linux?
Not faster, sooner. It's different.
I'm trying to see the difference in this context.
Let's just say on normal Linux a process takes 100 milliseconds, and on RTL it takes 50 milliseconds, isn't it both faster and sooner? 50 is faster than 100, and sooner AKA closer to the current time, because 50 milliseconds is closer to the current time than 100 milliseconds.
-
@Dashrender said:
@scottalanmiller said:
@coliver said:
@scottalanmiller said:
@DustinB3403 said:
Slower overall?
or faster overall?
It would seem to be that they want the system to be faster.
No, they want it to be slower. Much slower so that they can focus on latency.
In all cases, latency (response time) comes at a cost to throughput (speed, as most people define it). This isn't Linux specific but just a general rule. Same goes for traffic, computer hardware, post office delivery, whatever.
So RTL will be processing fewer commands faster than traditional Linux?
Not faster, sooner. It's different.
I'm trying to see the difference in this context.
Let's just say on normal Linux a process takes 100 milliseconds, and on RTL it takes 50 milliseconds, isn't it both faster and sooner? 50 is faster than 100, and sooner AKA closer to the current time, because 50 milliseconds is closer to the current time than 100 milliseconds.
From the little reading I did just now, it may happen sooner but it isn't as efficient an operation.
-
@Dashrender said:
Let's just say on normal Linux a process takes 100 milliseconds, and on RTL it takes 50 milliseconds, isn't it both faster and sooner?
That's not a good way to think of it. The processes take roughly the same amount of time either way. RTL would actually be the slower of the two, taking maybe 110ms instead of 100ms. But the value of RTL is that it "starts sooner," not that it takes less time to run.
-
@scottalanmiller said:
@Dashrender said:
Let's just say on normal Linux a process takes 100 milliseconds, and on RTL it takes 50 milliseconds, isn't it both faster and sooner?
That's not a good way to think of it. The processes take roughly the same amount of time either way. RTL would actually be the slower of the two, taking maybe 110ms instead of 100ms. But the value of RTL is that it "starts sooner," not that it takes less time to run.
Awww - OK that changes things. So the old way took 100 ms, but didn't start for 200 ms, the RTL takes 100 ms, but starts in 20 ms, or some noticeable time less than the old way.
And this is where the inefficiencies are brought in. Now the resources have to be waiting, available to start on the job with the least amount of queueing/lag/wait time before starting the process.
yes?
-
@Dashrender said:
Awww - OK that changes things. So the old way took 100 ms, but didn't start for 200 ms, the RTL takes 100 ms, but starts in 20 ms, or some noticeable time less than the old way.
Much better. RTL makes things start much faster, but can run far fewer things overall.
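Putting the numbers from this exchange into a tiny sketch (the wait and run times are the illustrative figures used above, not measurements): the real-time job actually runs slightly slower, but because it starts so much sooner, it still finishes first.

```python
# Illustrative numbers from the thread: the job itself takes roughly the
# same time either way; only the wait before it starts differs.

standard = {"wait_ms": 200, "run_ms": 100}
realtime = {"wait_ms": 20,  "run_ms": 110}   # slightly slower run, far sooner start

def completion(job):
    """Total time from request to finish: queueing wait plus run time."""
    return job["wait_ms"] + job["run_ms"]

print("standard completes at", completion(standard), "ms")   # 300 ms
print("realtime completes at", completion(realtime), "ms")   # 130 ms
```

The run time went up, yet the completion time dropped, which is exactly the "starts sooner, doesn't run faster" distinction.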
-
@Dashrender Yes, this is kinda crummy but picture your job is to look at a screen, and when you see the light turn on, press a button (RTC) vs your job is to wait for a timer, check if the light is on, then push the button if it is.
-
@Dashrender said:
And this is where the inefficiencies are brought in. Now the resources have to be waiting, available to start on the job with the least amount of queueing/lag/wait time before starting the process.
Correct. If the CPU is not both idle AND actively checking for things to do all of the time, you get delays. So you need idle CPUs with busy clocks.
-
@MattSpeller said:
@Dashrender Yes, this is kinda crummy but picture your job is to look at a screen, and when you see the light turn on, press a button (RTC) vs your job is to wait for a timer, check if the light is on, then push the button if it is.
To potentially push the analogy beyond its usefulness:
While you're waiting to see if the light will turn on (RTC) you're using your attention directly, vs with a timer, you can be more organized to monitor lots of things, albeit with a delay.
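The light-and-button analogy above can be sketched as two monitoring styles (the `Light` class and integer "clock" are hypothetical stand-ins, not a real hardware API): dedicated busy-watching reacts the instant the light comes on, while timer-based polling can cover many lights but only reacts at the next tick.

```python
import itertools

class Light:
    """Hypothetical light that turns on at a fixed time."""
    def __init__(self, on_at):
        self.on_at = on_at
    def is_on(self, t):
        return t >= self.on_at

def busy_watch(light):
    """Real-time style: full attention on one light; react immediately."""
    for elapsed in itertools.count():       # spin, checking every time unit
        if light.is_on(elapsed):
            return elapsed                  # reacts the moment it's on

def timer_poll(lights, period):
    """General-purpose style: wake on a timer and check many lights;
    cheaper per light, but reaction is delayed up to one period."""
    for tick in itertools.count():
        elapsed = tick * period
        if any(l.is_on(elapsed) for l in lights):
            return elapsed                  # reacts at the next tick

print(busy_watch(Light(7)))                 # reacts at t=7
print(timer_poll([Light(7)], period=5))     # next tick after t=7 is t=10
```

The busy watcher burns its entire attention (CPU) on one light to get zero delay; the poller trades up to one period of delay for the ability to monitor a whole list.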
-
My dad @SonshineAcres used to do hardcore real-time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real-time latency needs. So to see what performance issues were happening, they would put an oscilloscope on the CPU, as different CPU commands produce different voltages. Since the CPU was cycling commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had been changed.
-
@scottalanmiller I did similar in college with QNX RTOS (later made famous by RIM) and (ancient/decrepit by today's standards) PIC microcontrollers.
Edit: for those of you who'd remember, I'll add this to start a flame war: PICs >> AVR
-
@scottalanmiller said:
My dad @SonshineAcres used to do hardcore real-time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real-time latency needs. So to see what performance issues were happening, they would put an oscilloscope on the CPU, as different CPU commands produce different voltages. Since the CPU was cycling commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had been changed.
That seems old school. I'm assuming today that a hand-off to a second processor could handle the monitoring so as not to affect the RT function?
-
@MattSpeller said:
@scottalanmiller I did similar in college with QNX RTOS (later made famous by RIM) and (ancient/decrepit by today's standards) PIC microcontrollers.
Been a while but I ran QNX back in the day too.
-
@Dashrender said:
@scottalanmiller said:
My dad @SonshineAcres used to do hardcore real-time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real-time latency needs. So to see what performance issues were happening, they would put an oscilloscope on the CPU, as different CPU commands produce different voltages. Since the CPU was cycling commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had been changed.
That seems old school. I'm assuming today that a hand-off to a second processor could handle the monitoring so as not to affect the RT function?
A second processor cannot do what you are thinking, though. It has no way to talk to the first processor like that without interrupting it. So while CPUs have tons more power today, they would still introduce the overhead by talking to each other.
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
My dad @SonshineAcres used to do hardcore real-time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real-time latency needs. So to see what performance issues were happening, they would put an oscilloscope on the CPU, as different CPU commands produce different voltages. Since the CPU was cycling commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had been changed.
That seems old school. I'm assuming today that a hand-off to a second processor could handle the monitoring so as not to affect the RT function?
A second processor cannot do what you are thinking, though. It has no way to talk to the first processor like that without interrupting it. So while CPUs have tons more power today, they would still introduce the overhead by talking to each other.
wow, even that single cycle would be too much, eh?
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
My dad @SonshineAcres used to do hardcore real-time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real-time latency needs. So to see what performance issues were happening, they would put an oscilloscope on the CPU, as different CPU commands produce different voltages. Since the CPU was cycling commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had been changed.
That seems old school. I'm assuming today that a hand-off to a second processor could handle the monitoring so as not to affect the RT function?
A second processor cannot do what you are thinking, though. It has no way to talk to the first processor like that without interrupting it. So while CPUs have tons more power today, they would still introduce the overhead by talking to each other.
wow, even that single cycle would be too much, eh?
Depends. When you are pushed to the limits of the CPU, more than it can handle is still... more. But it is a lot more than a cycle, too. Doing something like logging or relaying a bit of information takes a lot of cycles. If you are trying to use zero extra, this would be a lot extra. All relative.
-