• @SigHunter@lemmy.kde.social
      7 · 10 months ago

      Back in the day, the rule was Mbit (megabit) for data in transit (network speed) and MB (megabyte) for data at rest, like on HDDs

        • rhys the mediocre bald man
          1 · 10 months ago

          @Moneo @SigHunter Networking came to be when there were lots of different implementations of a ‘byte’. The PDP-10, for example, was prevalent at the time the internet was being developed and supported variable byte lengths of up to 36 bits per byte.

          Network protocols had to support every device regardless of its byte size, so protocol specifications settled on bits as the lowest common unit size, while referring to 8-bit fields as ‘octets’ before 8-bit became the de facto standard byte length.

        • @bitwaba@lemmy.world
          1 · 10 months ago

          The real answer?

          Data is transmitted in packets. Each packet has a packet header, and a packet payload. The total data transmitted is the header + payload.

          If you’re transmitting smaller packets, the header makes up a larger percentage of each packet.

          Measuring in megabits is the ISP telling you “look, your connection is good for X amount of data. How you choose to use that data is up to you. If you want more of it going to your packet headers instead of your payload, fine.” A bit is a bit is a bit to your ISP.
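          To put rough numbers on that, here’s a minimal sketch of how the header share grows as packets shrink. The header sizes are assumed (plain Ethernet + IPv4 + TCP with no options or VLAN tags); real framing varies:

          ```python
          # Rough sketch of per-packet header overhead.
          # Header sizes are assumed typical values: Ethernet (14 B) + IPv4 (20 B) + TCP (20 B),
          # no options, no VLAN tags -- actual traffic will differ.
          HEADER_BYTES = 14 + 20 + 20

          def header_overhead_percent(payload_bytes: int) -> float:
              """Share of each transmitted packet spent on headers, as a percentage."""
              total = HEADER_BYTES + payload_bytes
              return 100 * HEADER_BYTES / total

          for payload in (64, 512, 1460):  # 1460 B roughly fills a typical 1500 B MTU
              print(f"{payload:>5} B payload -> {header_overhead_percent(payload):.1f}% header")
          ```

          With tiny 64 B payloads nearly half of what you send is headers, while full-size packets spend only a few percent on them, which is why the ISP just counts bits either way.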

    • @lud@lemm.ee
      3 · 10 months ago

      The best format imo is MB/s and Mbit/s

      It avoids all confusion.
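      Converting between the two is just a factor of eight. A minimal sketch, assuming decimal megabytes (not MiB) and ignoring protocol overhead:

      ```python
      # Minimal sketch: convert an advertised line speed in Mbit/s to MB/s.
      # Assumes decimal units (1 MB = 1,000,000 bytes); protocol overhead ignored.

      def mbit_to_mb_per_s(mbit_per_s: float) -> float:
          """Megabits per second -> megabytes per second (8 bits per byte)."""
          return mbit_per_s / 8

      print(mbit_to_mb_per_s(100))  # a 100 Mbit/s connection tops out around 12.5 MB/s
      ```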