
    Posts made by adam.ierymenko

    • RE: ZeroTier: Gateway device?

      We have our own community site at https://www.zerotier.com/community/

      I'd look into bridging options in Linux. I don't remember them off the top of my head, but I know there are several options around enabling bridging and controlling bridge behavior.

      Does tcpdump on the remote end show you anything?
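
      For example, something like this on the remote end (a quick sketch; zt0 is just a placeholder for whatever your ZeroTier interface is actually named) will at least tell you whether the packets are arriving:

      sudo tcpdump -n -i zt0 icmp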

      posted in IT Discussion
    • RE: ZeroTier: Gateway device?

      @FATeknollogee Two ways:

      1. Run ZeroTier on the device(s) themselves. Right now the feasibility of this varies by device, but we're planning to support more here in the future.

      2. Bridge them with an auxiliary device.

      Bridging is a subject that needs more documentation, but it's not terribly hard to do.

      Let's say you have a ZeroTier network with the IPv4 subnet 10.10.10.0/24 and you have ten devices you want to bridge in.

      The simplest thing would be to:

      1. Edit the network's IP auto-assignment configuration and reduce the assignment range to reserve, say, everything above 200 for non-ZT devices.

      2. Set up ZT on a Linux machine such as a Raspberry Pi or a Linux VM on your network. (If it's a VM, be sure the hypervisor allows bridging. Some, like VMware, have a setting for this.) Designate this device as an "active bridge" at the network controller level, which means it's allowed to bridge other things in. (The active bridge setting also alters its multicast behavior a bit. Bridges use slightly more bandwidth since they see more multicast traffic.)

      3. Create a Linux bridge device br0 (instructions differ by Linux distro) and add zt0 and eth0 (or wlan0, etc.) to it.

      4. Manually assign your phones and other devices IPs like 10.10.10.201 and 10.10.10.202, and attach them to the physical network that the bridge you configured above joins to ZeroTier.

      ZeroTier emulates L2 Ethernet, so what you've done is create a single Ethernet network consisting of a physical wired or WiFi network bridged to a virtual ZeroTier network by a bridge device. The bridge device "glues" them together, passing packets back and forth. Linux's bridging driver is very good and handles a lot of edge cases like MTU mismatch, and we've found that it works pretty well in practice.
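
      As a rough sketch of step 3 above (assuming your ZeroTier interface is named zt0 and your physical interface is eth0; names differ per system, and 10.10.10.250 is just an example address from the reserved range), the bridge-utils way looks something like:

      # create the bridge and enslave both interfaces
      sudo brctl addbr br0
      sudo brctl addif br0 zt0
      sudo brctl addif br0 eth0

      # the bridge itself should carry the IP, not the member interfaces
      sudo ip addr flush dev eth0
      sudo ip addr add 10.10.10.250/24 dev br0
      sudo ip link set br0 up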

      Now a ZT device with IP 10.10.10.100 should be able to ping 10.10.10.201, etc.

      Raspberry Pis work great for this kind of thing. They're ideal for cheap DIY low-power network devices like bridges, routers, NAS boxes (connect a USB drive), etc.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @scottalanmiller I disagree about NAT traversal being easy. It isn't too bad in, say, 90% of cases, but there's a long tail of awful edge cases and bad NATs that are terrible to deal with. We know this all too well.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @scottalanmiller People already run PBXes and VoIP over ZeroTier and say it works great. No need to worry about NAT-T, etc.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender The economy of scale thing is what I meant by the p2p complexity tax being "regressive" in my presentation on firewalls. The bigger you are, the less it costs to either invest in the engineering required to do p2p well or just back-haul everything to the cloud. If (like MS) you own a bunch of your own data centers, then putting all traffic through your cloud is very cheap due to the scale you already have. So the cloud back-haul requirement intrinsically favors large vendors.

      Personally I think Skype going central was just the MS economy of scale thing. You can do P2P on mobile: ZeroTier has an Android app and soon an iOS one, and they work fine. My phone is always pingable on our company LAN and the impact on battery life is in the fractions of a percent. Of course that's probably more true today than it was then: Skype ported to mobile back when phones had slower single-core CPUs and smaller batteries. Radios have quietly gotten much more efficient too, so the constant low-grade peer-to-peer packet slinging doesn't eat as much battery as it might have with earlier-generation LTE and WiFi chipsets.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender Decentralization is not all or nothing. You can have a p2p network with a central database that it uses for persistence and missed connections.

      If you want to go all-in on decentralization you can do that with a DHT and crypto, but it's more work and possibly less reliable or slower.

      As far as the feds telling Skype to centralize: I personally doubt this and have always heard it was because they found p2p too hard on mobile. Another reason is that they were bought by Microsoft. Centralization gets dramatically cheaper if you already own data centers; it's an economy of scale. So once MS bought them the economic incentive to decentralize was gone. Centralization is also a more conventional way of doing things that more coders understand, and it does make some problems simpler.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender SDNs are about connectivity and manageability, not security per se, though they can of course be secure and have lots of security-related features. SDN is about being able to have mobile devices with stable addresses, fail over without interrupting flows, control where flows go, provision new network paths without pulling cable, seamlessly link locations, fail over across ISPs and clouds, etc.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @wirestyle22 I was describing a guiding principle. Obviously not everything measures up to that, and firewalls are still needed for a lot of situations. I just consider them "legacy" and think that if you're designing or building something new, it's best to design it to be secure in itself rather than assuming your private network is always going to stay private. Never trust the network, especially if it might have light bulbs and cloud-connected printers on it.

      I also think the firewall's obsolescence is a fact regardless of how I or anyone else might feel about it. IoT, BYOD, and the cloud are killing it so best plan for its death and prepare accordingly. I just happen to be in the camp that's quietly cheering for its demise because I think it's a bad ugly hack that breaks the functionality of networks and endpoint-centric security is better.

      Edit: this is good too: http://etherealmind.com/why-firewalls-wont-matter-in-a-few-years/

      I basically agree with all of that.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender "That is no lie - So I can't get what I want, you'll give me this little thing over here, OK I'll just create a way to get what I want through that little thing.. done.. yeah - huge problem!"

      You can't secure things by breaking them. People will find ways around your barriers because they need things to work, and the things they cobble together will probably be less secure than what you started with. You have to secure things by actually securing them.

      Fundamentally the endpoint is either secure or it is not. If it's not, all someone has to do is get into something behind your firewall and they own you. Increasingly that something could be a printer, a light bulb, or a microwave oven. How often do you patch your light bulbs? If the cloud killed the firewall, then IoT will dig it up and cremate it and encase it in concrete and re-bury it.

      My approach to security is: secure everything as if it will be totally exposed on the public Internet, then add firewalls and such as an afterthought if appropriate. If something is not secure enough to be exposed to the public Internet without a firewall, it's not secure enough to be connected to any network ever.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender "How do you propose keeping the baddies that are trying to attack you over the web? I understand pull vectors, but what about the push ones?"

      Local firewalls aren't obsolete. They're a pretty good way to limit your surface area. But personally I just like to make sure I'm not running anything I don't need. Also make sure you are up to date on patches, etc.
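
      A quick way to audit that (a rough sketch; exact flags vary a little by distro) is to list everything that's actually listening and disable whatever you don't recognize:

      # show listening TCP and UDP sockets and the processes that own them
      sudo ss -tulnp

      # or on older systems
      sudo netstat -tulnp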

      But the bottom line is that 90% of baddies aren't attacking you with inbound connections anymore. They're trying to phish, scam, sneak malware in, and get you to visit malicious URLs. They've moved "up the stack," abusing vectors like social media, Dropbox/Google Drive, e-mail, etc. This is partly a direct response to the firewall and partly because these types of attacks are a lot more effective.

      Based on real-world experience, the only exception I'd give to the above is web apps. There was a case where a vulnerable PHP web app was attacked, but that app was in the DMZ, so the firewall did nothing there either; it was supposed to be exposed. Most people don't run PHP web apps on desktops and mobile devices.

      I suppose you could still ask: if we got rid of firewalls tomorrow, would we (setting aside unpatched and obsolete OSes) again see an epidemic of remote attacks? I can't say for sure that we wouldn't, but I personally doubt it. You'd see remote attacks against old vulnerable junk, but newer patched systems would not fare too badly, and the exposure would probably help harden things further. Firewalls promote immune-system atrophy.

      Of course ZeroTier has private certificate-gated networks and that's what most people use. Those are similar to VPN endpoints in risk profile. You can still have your boundary. It's just software defined.

      A bit beyond IT pragmatism, but I gave this presentation a while back about how firewalls contribute to Internet centralization, surveillance, and the monopolization of communication by closed silos like Facebook and Google: https://www.zerotier.com/misc/BorderNone2014-AdamIerymenko-DENY_ALL.pdf

      The core argument I'm making there is that the firewall is a grandfathered-in hack to work around very, very bad endpoint security and the fact that IP has no built-in authentication semantics. It's also a fundamentally broken security model, since it uses a non-cryptographic identifier (IP:port) as a security credential. Non-cryptographic credentials are worthless.

      In a later presentation I distilled the "Red Queen's Race" slides to a "law of protocol bloat": any protocol allowed through the firewall accumulates features until it encapsulates or duplicates all functionality of all protocols blocked by the firewall. Examples: SSH, HTTP. In the end you just end up running an inferior version of IP encapsulated within another protocol.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender Here open this attachment!

      No joke though. I really honestly think we could have just taken our firewall down and given every machine a public IP and there would have been little or no change to security posture. If anything, firewalls encourage the "soft underbelly" problem by giving people the illusion that the local network is secure. Take that old obsolete crutch away and people who do things like bind unpassworded databases to ::0 will look like dummies real fast and the problem will take care of itself over time.

      It's been a while since I've seen a completely straightforward, naive remote vulnerability in a consumer OS. By "naive" I mean one that can be exploited in the real world with no credentials, special knowledge, or participation from the user. OSes really have gotten better, and if you turn off unnecessary services you're probably not in much danger. The danger isn't nonexistent, but it's probably a lot less than, say, browsing the web with five different plugins enabled or the always popular:

      curl http://note_lack_of_https.itotallytrustthissitelol.com/ | sudo bash

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender Finally, you can count me in the "firewalls are obsolete" camp. I've worked in infosec before. During my tenure we saw many attacks, and zero were naive remote attacks that the firewall did anything to stop.

      A short summary of real-world attack vectors we saw: phishing, phishing, phishing, phishing, phishing, malware, phishing, drive-by downloads, phishing, and phishing. Did I mention phishing? The least secure thing on the network is the meat bag behind the screen, and against every one of the above the firewall is worthless. That's because all those threat vectors are "pull" based, not "push" based. We had malware get in through the web, e-mail, Dropbox (with phishing), etc., and in every case it was pulled in over HTTPS and IMAPS connections that happily went right through the firewall.

      Firewalls are dead. Thank the cloud.

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @Dashrender The answer is a huge pile of "it depends." It depends on protocol, application, OS, etc.

      If you're running a closed/private ZeroTier network, then you're not at much greater risk than if you have a VPN. A public ZeroTier network is obviously exposing you a lot more, but keep in mind that every time you join a coffee shop, hotel, university, or other public WiFi network you are doing the same thing. Every time you join someone's WiFi you are exposing L2.

      So the risk is not as great as you might think. A lot of people think "ZOMG! my machine is exposed I will get hax0r3d in seconds!" This is mostly an obsolete fear. OSes today are a lot more secure than they were in the late 90s / early 2000s when we had remote Windows vulnerability of the week and LAN worms were commonplace. You can still have problems if you have a bunch of remote services enabled but most OSes no longer ship this way.

      If you have ZeroTier and join 8056c2e21c00001 (Earth, our public test net) and ping 29.44.238.229, that's my laptop. If you don't get a ping reply it probably means it's asleep. Obviously I am not worried about it. Of course the only remote service I run is ssh and I don't allow password auth so there isn't a lot of exposed surface area.
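
      If you want to try it yourself, it's just (assuming the command-line client is installed; the network ID and IP are the ones above):

      sudo zerotier-cli join 8056c2e21c00001
      sudo zerotier-cli listnetworks   # wait until the network shows OK
      ping 29.44.238.229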

      There is still some risk of course. The only way to perfectly secure a computer is to turn it off.

      As far as MITM goes, there are a couple answers there and it depends on the nature of the attack. Network virtualization layers like ZeroTier are generally more secure than cheapo switches or WiFi routers in that the MAC addresses of endpoint devices are cryptographically authenticated. It's harder to spoof endpoints, though it's not impossible. On ZT you can't spoof L2 traffic without stealing someone's identity.secret file. It's a bit like a wired network with 802.1X.

      The only wrinkle is Ethernet bridging, and that's why bridging must be allowed on a per-device basis. Normal devices are not allowed to bridge.

      But... the real answer to MITM is: never trust the network. If you are not authenticating your endpoint cryptographically then you are vulnerable to MITM on every network. Use SSL, SSH, etc. and check certificates or you are not safe.
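
      Concretely, that just means making your clients fail closed when the far end can't prove who it is (a sketch; the host name is a placeholder):

      # ssh: refuse to connect if the host key is unknown or has changed
      ssh -o StrictHostKeyChecking=yes user@host.example.com

      # curl/TLS: leave certificate verification on (the default); never use -k / --insecure
      curl https://host.example.com/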

      posted in IT Discussion
    • RE: Software Defined WAN

      @dafyre We've considered making a little appliance for this, or a ready-to-run Raspberry Pi image.

      posted in IT Discussion
    • RE: Software Defined WAN

      @dafyre Big gotchas are (1) designating the node as a bridge on your network at the ZT level, and (2) getting the IP routing right so that hosts on either side of the bridge can actually see each other. Remember that Ethernet is not IP: if a host doesn't know that another host's IP range is on the same network, it won't talk to it directly; it will try to go via its default gateway instead.

      There are also a few weird Linux options, such as one that selects whether or not Ethernet bridge packets also traverse iptables. Usually you want this off (I forget the actual setting, but it's a sysctl), though sometimes it can be useful... if a bit perverse. There's also Linux ebtables (Ethernet bridge tables), which is useful for advanced stuff.
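
      If memory serves, the sysctl in question is the bridge-netfilter one, and an ebtables rule looks like this (a sketch; the MAC address is a placeholder, and newer kernels may need the br_netfilter module loaded for the sysctl to exist):

      # 0 = bridged frames skip iptables, 1 = they traverse it
      sudo sysctl -w net.bridge.bridge-nf-call-iptables=0

      # ebtables: drop frames from a given MAC at the Ethernet layer
      sudo ebtables -A FORWARD -s 00:11:22:33:44:55 -j DROP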

      One more tidbit: if you allow all Ethernet frame types on a ZT network, spanning tree protocol will work and your bridges and switches will handle switching loops. They will treat ZT like just another switch or LAN segment and work normally. (ZT itself knows nothing about STP, but Linux bridging does.)

      posted in IT Discussion
    • RE: Software Defined WAN

      @dafyre Bridging works much better than I thought it would when I developed that feature. At first I was like "well, technically this is possible but I'm going to call it experimental until we see how it works in practice." I've heard of people using it with whole big LANs behind it, so I'm a bit stunned. 🙂

      posted in IT Discussion
    • RE: If LAN is legacy, what is the UN-legacy...?

      @scottalanmiller "designed solely around maintaining the LAN ideologically rather than replacing it."

      I'd disagree with that, at least insofar as ZeroTier is concerned. It emulates a LAN because it's convenient to do so: everything just works and software can just speak TCP/IP (or any other protocol). But if anything the goal is to embrace the post-LAN world and evolve away from the LAN model. Making LANs work like Slack channels is a step in this direction.

      I really like what you wrote above and some of it is exactly what I was thinking when I first started working on ZeroTier years ago.

      ZT solves multiple problems: (1) a better p2p VPN/SDN, (2) mobility and stable mobile addressing, (3) providing (1) and (2) everywhere including on vast numbers of WiFi, carrier, and legacy networks that do not permit open bi-directional access to the Internet. Internally we view the existing Internet/Intranet deployment topology with its NAT gateways and such as "the enemy." NAT in particular is the enemy and "break NAT" is an internal development mantra.

      An analogy would be RAID, which seeks to achieve reliability using arrays of unreliable disks. In our case we want to achieve a flat reliable global network by running on top of an inconsistent, half-broken, gated, NATed spaghetti mess.

      IPv6 should have done these things but didn't, and probably won't unless IPv6 mobility becomes a real thing and we can convince millions upon millions of IT admins to drop the concept of the local firewall. Even if IPv6 ever does do these things, we'll probably have to wait until the 2030s for it. If that ever happens, ZT was designed with migration paths in mind. Hint: a 64-bit network ID plus a 40-bit device ID is only 104 bits, which fits comfortably inside a 128-bit IPv6 address.

      Our long-term target is not AD or other LAN-centric ways of doing things, which is why we haven't built deeply into AD the way Pertino has. Our long-term target is the Internet of Things, mobile, and apps. If you pull the ZT source you can see this: the ZT network virtualization core is completely independent of any OS-dependent code and is designed (eventually, with a little more work) to build on embedded devices.

      posted in IT Discussion
    • RE: Software Defined WAN

      @dafyre You can bridge ZeroTier to standard Ethernet, though at the moment it requires some manual configuration work and some expertise with Linux and bridging and such.

      Edit: pretty easy to do with a Raspberry Pi, although the USB-attached 100 Mbit Ethernet on those won't work for really high-bandwidth stuff. It's fine for ordinary use though, since the WAN is usually slower than that.

      posted in IT Discussion
    • RE: ZeroTier and DHCP

      Assign mode 'dhcp' is intended to mean 'enable DHCP on this interface and let the OS query DHCP and get an IP assignment.' But it's not actually implemented yet in the client, so it would do nothing and be equivalent to 'none'.

      DHCP isn't the default method because DHCP is unsafe. If you joined a malicious network, DHCP could be used to push, for example, alternative DNS servers and other settings to your device. Some OSes support all kinds of potentially unsafe settings via DHCP. So it's something we'd want to enable only with some consideration. The current idea is to require the user to explicitly okay DHCP on a per-network basis before it would ever be used, even if 'dhcp' is the assign mode.

      You can use DHCP now by setting the assign mode to 'none' and invoking a DHCP client yourself, and it will work.
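
      In other words, something like this on the member device (a sketch; zt0 is a placeholder for the actual ZeroTier interface name, and you need a DHCP server reachable over the network, e.g. via a bridge):

      # with the network's assign mode set to 'none', just run a normal DHCP client
      sudo dhclient zt0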

      posted in IT Discussion
    • RE: ZeroTier and DNS

      Obviously if you go 'all in' with SDN then your private IPs will always just work, but not everyone can do that.

      posted in IT Discussion