    Linux Foundation Begins Major Focus on Real Time Linux

    News · Tags: linux, linux foundation, real time linux

    • scottalanmiller

      RTL is all about lowering the response time of the system. It responds more quickly by having a faster clock tick, for example, but that comes at a cost of doing less on each tick and spending more CPU time ticking and less optimizing.
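
As a concrete illustration of "lowering the response time": below is a minimal C sketch of what a latency-sensitive task typically asks of the kernel on a PREEMPT_RT (or at least PREEMPT) build. The priority value and the overall setup are illustrative assumptions, not details from the Linux Foundation announcement.

```c
/* Minimal sketch: ask the kernel to treat this task as latency-critical.
 * Assumes a PREEMPT_RT (or PREEMPT) kernel and root/CAP_SYS_NICE.
 */
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 80;   /* 1..99 for SCHED_FIFO; 80 is an arbitrary example */

    /* Run under the real-time FIFO scheduler so we preempt normal tasks. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* Lock all pages into RAM so a page fault cannot add unbounded latency. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }

    /* ... the latency-sensitive work loop would go here ... */
    return 0;
}
```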

      • coliver @scottalanmiller

        @scottalanmiller said:

        RTL is all about lowering the response time of the system. It responds more quickly by having a faster clock tick, for example, but that comes at a cost of doing less on each tick and spending more CPU time ticking and less optimizing.

        Interesting.

        • MattSpeller

          Blargh, I took 2 classes on how this works!! I wish my memory didn't suck so I could explain it better.

          • Dashrender @scottalanmiller

            @scottalanmiller said:

            @coliver said:

            @scottalanmiller said:

            @DustinB3403 said:

            Slower overall?

            or faster overall?

            It would seem to be that they want the system to be faster.

            No, they want it to be slower. Much slower so that they can focus on latency.

            In all cases, latency (response time) comes at a cost to throughput (speed as most people define it). This isn't Linux specific but just a general rule. The same goes for traffic, computer hardware, post office delivery, whatever.

            So RTL will be processing fewer commands faster than traditional Linux?

            Not faster, sooner. It's different.

            I'm trying to see the difference in this context.

            Let's just say on normal Linux a process takes 100 milliseconds and on RTL it takes 50 milliseconds. Isn't it both faster and sooner? 50 is faster than 100, and it is also sooner, because 50 milliseconds from now is closer to the present than 100 milliseconds.

            • coliver @Dashrender

              @Dashrender said:

              Let's just say on normal Linux a process takes 100 milliseconds and on RTL it takes 50 milliseconds. Isn't it both faster and sooner?

              From the little reading I did just now, it may happen sooner but it isn't as efficient an operation.

              • scottalanmiller @Dashrender

                @Dashrender said:

                Let's just say on normal Linux a process takes 100 milliseconds and on RTL it takes 50 milliseconds. Isn't it both faster and sooner?

                That's not a good way to think of it. The processes take roughly the same amount of time either way. RTL would actually be slower, taking maybe 110ms instead of 100ms. But the value of RTL is that it "starts sooner," not that it takes less time to run.
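
To make "sooner, not faster" concrete, here is a rough sketch in the spirit of the cyclictest tool (much simplified): it measures wakeup latency, i.e. how long after a requested instant the thread actually starts running. An RT kernel shrinks that number; it does not shrink the time the work itself takes.

```c
/* Sketch: measure how late a thread wakes up relative to when it asked to wake.
 * This is the "starts sooner" number an RT kernel improves; the work itself
 * is not made faster.
 */
#include <stdio.h>
#include <time.h>

static long long ns(const struct timespec *t)
{
    return (long long)t->tv_sec * 1000000000LL + t->tv_nsec;
}

int main(void)
{
    struct timespec target, now;

    clock_gettime(CLOCK_MONOTONIC, &target);
    target.tv_sec += 1;                     /* ask to wake exactly 1 s from now */

    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &target, NULL);
    clock_gettime(CLOCK_MONOTONIC, &now);

    /* Wakeup latency: how long after the requested instant we actually ran. */
    printf("woke %lld ns late\n", ns(&now) - ns(&target));
    return 0;
}
```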

                • Dashrender @scottalanmiller

                  @scottalanmiller said:

                  @Dashrender said:

                  Let's just say on normal Linux a process takes 100 milliseconds and on RTL it takes 50 milliseconds. Isn't it both faster and sooner?

                  That's not a good way to think of it. The processes take roughly the same amount of time either way. RTL would actually be slower, taking maybe 110ms instead of 100ms. But the value of RTL is that it "starts sooner," not that it takes less time to run.

                  Awww - OK, that changes things. So the old way took 100 ms but didn't start for 200 ms, while RTL still takes 100 ms but starts in 20 ms, or some noticeably shorter time than the old way.

                  And this is where the inefficiencies come in. The resources have to be sitting idle, ready to start on the job with the least possible queueing/lag/wait time before the process starts.
                  Yes?

                  • scottalanmiller @Dashrender

                    @Dashrender said:

                    Awww - OK, that changes things. So the old way took 100 ms but didn't start for 200 ms, while RTL still takes 100 ms but starts in 20 ms, or some noticeably shorter time than the old way.

                    Much better. RTL makes things start much faster, but can run far fewer things overall.
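
To put rough, purely illustrative numbers on that tradeoff: if servicing one timer tick costs about 10 microseconds, a 100 Hz tick spends 100 × 10 µs = 1 ms of every second (0.1% of the CPU) just ticking, while a 10,000 Hz tick spends 10,000 × 10 µs = 100 ms of every second (10%). Work can start sooner after an event with the faster tick, but a tenth of the machine's throughput now goes to the ticking itself.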

                    • MattSpeller @Dashrender

                      @Dashrender Yes. This is a kinda crummy analogy, but picture it: your job is to watch a screen and press a button the moment the light turns on (RTC), versus your job is to wait for a timer, check whether the light is on, and then push the button if it is.
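
A loose C sketch of the two jobs in that analogy. Everything named here (light_fd, light_is_on(), press_button()) is a hypothetical stand-in, but it shows the shape of the tradeoff: blocking on the event gives the smallest reaction delay, while the timer loop can react up to one period late but leaves room to check many other things between passes.

```c
#include <poll.h>
#include <time.h>

/* Hypothetical stand-ins: light_fd is whatever device signals the light,
 * light_is_on() reads its current state, press_button() is the reaction. */
static int light_is_on(int light_fd) { (void)light_fd; return 0; }
static void press_button(void) { }

/* Real-time style: block until the event arrives, react immediately. */
void watch_the_light(int light_fd)
{
    struct pollfd pfd = { .fd = light_fd, .events = POLLIN };
    poll(&pfd, 1, -1);          /* sleeps until the device reports the event */
    press_button();             /* reaction delay is just the wakeup latency */
}

/* Timer style: wake every 10 ms and check. The reaction can be up to 10 ms
 * late, but the same loop could also check many other things on each pass. */
void check_on_a_timer(int light_fd)
{
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };

    for (;;) {
        if (light_is_on(light_fd)) {
            press_button();
            return;
        }
        nanosleep(&ts, NULL);   /* 10 ms between checks */
    }
}

int main(void)
{
    /* With the stand-ins above there is no real device to watch;
     * the two functions are the point of the sketch. */
    return 0;
}
```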

                      • scottalanmiller @Dashrender

                        @Dashrender said:

                        And this is where the inefficiencies come in. The resources have to be sitting idle, ready to start on the job with the least possible queueing/lag/wait time before the process starts.

                        Correct. If the CPU is not both idle AND actively checking for things to do all of the time, you get delays. So you need idle CPUs with busy clocks.
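
One common way to get "idle CPUs with busy clocks" in practice is to reserve a core for the latency-critical thread. A minimal sketch follows; the core number and the isolcpus boot option are illustrative assumptions, and it needs to be built with -pthread.

```c
/* Sketch: dedicate one core to the latency-critical thread so it is always
 * idle and waiting for the next event. Assumes core 3 exists and has been
 * kept clear of other work (e.g. booted with isolcpus=3); both are just
 * illustrative choices.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int pin_current_thread_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);

    /* Restrict this thread to the one reserved core. */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    if (pin_current_thread_to_core(3) != 0) {
        fprintf(stderr, "could not pin to core 3\n");
        return 1;
    }
    /* ... event-handling loop runs here, alone on its core ... */
    return 0;
}
```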

                        • MattSpeller

                          @MattSpeller said:

                          @Dashrender Yes. This is a kinda crummy analogy, but picture it: your job is to watch a screen and press a button the moment the light turns on (RTC), versus your job is to wait for a timer, check whether the light is on, and then push the button if it is.

                          To potentially push the analogy beyond its usefulness:

                          While you're waiting to see if the light will turn on (RTC) you're using your attention directly, whereas with a timer you can be more organized and monitor lots of things, albeit with a delay.

                          • scottalanmiller

                            My dad @SonshineAcres used to do hard core real time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real time latency requirements. So to see what performance issues were happening they would put an oscilloscope on the CPU, since different CPU commands produce different voltages. Because the CPU was cycling through its commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had changed.

                            • MattSpeller @scottalanmiller

                              @scottalanmiller I did similar in college with QNX RTOS (later made famous by RIM) and (ancient / decrepit by today's standards) PIC microprocessors.

                              Edit: for those of you who'd remember, I'll add this to start a flame war: PICs >> AVR

                              • Dashrender @scottalanmiller

                                @scottalanmiller said:

                                My dad @SonshineAcres used to do hard core real time systems (no operating system). They did RT systems so sensitive that they could not have code to log what was happening, as the logging would break the real time latency requirements. So to see what performance issues were happening they would put an oscilloscope on the CPU, since different CPU commands produce different voltages. Because the CPU was cycling through its commands, you could set the oscilloscope to repeat on a timer (frequency), which would show a CPU voltage pattern, and you could tell if the command sequence had changed.

                                That seems old school. I'm assuming that today a hand-off to a second processor could handle the monitoring, so as not to affect the RT function?

                                • scottalanmiller @MattSpeller

                                  @MattSpeller said:

                                  @scottalanmiller I did similar in college with QNX RTOS (later made famous by RIM) and (ancient / decrepit by today's standards) PIC microprocessors.

                                  Been a while but I ran QNX back in the day too.

                                  • scottalanmiller @Dashrender

                                    @Dashrender said:

                                    That seems old school. I'm assuming that today a hand-off to a second processor could handle the monitoring, so as not to affect the RT function?

                                    A second processor cannot do what you are thinking, though. It has no way to talk to the first processor like that without interrupting it. So while CPUs have tons more power today, they would still introduce overhead by talking to each other.
                                    • Dashrender @scottalanmiller

                                      @scottalanmiller said:

                                      A second processor cannot do what you are thinking, though. It has no way to talk to the first processor like that without interrupting it. So while CPUs have tons more power today, they would still introduce overhead by talking to each other.

                                      Wow, even that single cycle would be too much, eh?

                                      • scottalanmiller @Dashrender

                                        @Dashrender said:

                                        Wow, even that single cycle would be too much, eh?

                                        Depends. When you are pushed to the limits of the CPU, more than it can handle is still... more. And it is a lot more than a single cycle, too: doing something like logging or relaying a bit of information takes a lot of cycles. If you are trying to use zero extra, this would be a lot extra. It's all relative.
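
For what it's worth, the usual compromise today is to make that hand-off as cheap as possible rather than free: the real-time thread writes fixed-size records into a lock-free ring and a thread on another core does the slow logging. A rough sketch using C11 atomics follows (the record layout and sizes are illustrative assumptions). It still costs the RT core a few stores and some cache traffic per record, so the overhead never reaches zero.

```c
/* Sketch: single-producer/single-consumer ring for handing log records from a
 * hot real-time loop to a logger thread on another core. Illustrative only.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 1024                    /* must be a power of two */

struct log_rec { uint64_t timestamp; uint32_t event; uint32_t value; };

struct ring {
    struct log_rec slots[RING_SIZE];
    _Atomic size_t head;                  /* written by producer (RT thread) */
    _Atomic size_t tail;                  /* written by consumer (logger thread) */
};

/* Producer side: called from the RT loop. Never blocks; drops the record
 * if the consumer has fallen behind. */
bool ring_push(struct ring *r, struct log_rec rec)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head - tail == RING_SIZE)
        return false;                     /* full: dropping beats stalling */

    r->slots[head & (RING_SIZE - 1)] = rec;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: runs on a non-RT core and can afford to do slow I/O. */
bool ring_pop(struct ring *r, struct log_rec *out)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);

    if (tail == head)
        return false;                     /* empty */

    *out = r->slots[tail & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

int main(void)
{
    static struct ring r;                 /* zero-initialized: empty ring */
    struct log_rec rec = { .timestamp = 1, .event = 2, .value = 3 }, out;

    ring_push(&r, rec);
    return ring_pop(&r, &out) ? 0 : 1;    /* single-threaded smoke test */
}
```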

                                          • scottalanmiller

                                          Here it is: Understanding the Tradeoffs in Latency and Throughput
