https://chocolatey.org/packages/zerotier-one/1.1.14
Seems approved.
We might be able to attend. Smaller conferences like this with knowledgeable audiences are usually more valuable than mega-events with tens of thousands of people.
The root of the problem is that none of these protocols (DNS, AD, DHCP, etc.) were designed for a world in which a client can belong to more than one network.
@scottalanmiller Every ZT device has a cryptographically defined identity, so any time it gets a packet it knows who sent it. It can then try various paths for connectivity and use them if a bidirectional link is determined to be present. ZeroTier devices on the same virtual network try to reach each other over their local IPs as well as via NAT traversal and other methods, and if a local path works they prefer it to a global one. If that path stops working they fall back to whatever does work, following a preference order based on IP scope/class and type (V6 over V4, local over global, direct over indirect).
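To make that preference order concrete, here's a minimal sketch of that kind of path ranking. The struct and the weights are mine for illustration; this isn't ZeroTier's actual code:

```cpp
#include <vector>

// Hypothetical path descriptor -- for illustration, not ZeroTier's real types.
struct Path {
    bool direct; // reached without going through a relay
    bool local;  // remote IP is in a private / link-local scope
    bool ipv6;   // underlying transport is IPv6
    bool alive;  // bidirectional reachability was recently confirmed
};

// Higher score == more preferred: direct over indirect, local over global,
// V6 over V4 -- the ordering described above.
static int score(const Path &p) {
    return (p.direct ? 4 : 0) + (p.local ? 2 : 0) + (p.ipv6 ? 1 : 0);
}

// Pick the best currently working path. As links die and 'alive' flips off,
// selection automatically falls back down the preference order.
static const Path *choosePath(const std::vector<Path> &paths) {
    const Path *best = nullptr;
    for (const Path &p : paths) {
        if (p.alive && (!best || score(p) > score(*best)))
            best = &p;
    }
    return best; // nullptr: nothing works directly, fall back to relaying
}
```

Real path selection also has to cope with latency and flapping links, but the idea is the same: score every live path and keep re-picking the best one.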
It's open source if you're curious: https://github.com/zerotier/ZeroTierOne
There can be issues if a network controller is down for a long time, because certs have (effective) TTLs: a node that's been offline could come back and find itself unable to communicate. But the controller would have to be down for a while. And since ZT addresses are portable, if a controller goes down it can be brought up elsewhere with the same identity (failover).
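Conceptually the staleness check looks something like this. This is a sketch under assumed field names; the real certificate logic is in the source linked above:

```cpp
#include <cstdint>

// Hypothetical certificate of membership -- field names are mine.
struct MembershipCert {
    uint64_t issuedAt; // controller timestamp (ms) when this cert was issued
    uint64_t maxDelta; // max allowed timestamp difference -- the effective TTL
};

// Two members may talk only while their certs' timestamps agree to within
// the allowed delta. A node that's been offline too long (or whose controller
// has been down too long) holds a cert that's aged out relative to its peers.
static bool certsAgree(const MembershipCert &a, const MembershipCert &b) {
    const uint64_t delta = (a.issuedAt > b.issuedAt)
        ? (a.issuedAt - b.issuedAt)
        : (b.issuedAt - a.issuedAt);
    return (delta <= a.maxDelta) && (delta <= b.maxDelta);
}
```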
We're adding multi-homing soon, which will make this even more robust:
https://github.com/zerotier/ZeroTierOne/blob/adamierymenko-dev/node/Cluster.hpp#L71
Multi-homing will also be useful for nodes within networks. For example, you could create a global Cassandra cluster behind a single IP on your virtual LAN. The next version should contain an alpha of the cluster/multi-homing capability.
@Dashrender How long was the laptop asleep? If it was a while it's possible that its cert was no longer valid and it couldn't get a new one.
Unlucky moment... multi-homing/clustering of network controllers should make that orders of magnitude less likely. We're doing a lot of robustness work right now (not that it's bad as-is).
A lot of our users will place their intranet's DNS under a subdomain of their regular domain and use that -- e.g. ours is int.zerotier.com, and git.int.zerotier.com resolves to an internal IP. This works regardless of which DNS servers you are actually using.
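For example, a hypothetical BIND-style zone fragment (the host names and the 10.147.17.x addresses are just illustrations):

```
; hypothetical zone fragment for the int.zerotier.com subdomain --
; publish A records that point at ZeroTier-managed internal IPs
git.int.zerotier.com.   IN  A  10.147.17.5
wiki.int.zerotier.com.  IN  A  10.147.17.6
```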
Sometimes that's not an option. In that case the best approach might be to manually override the DHCP-supplied DNS and set your intranet's servers as your DNS servers. ZeroTier itself does not depend on DNS to work properly, and situations like this are exactly why.
@scottalanmiller "designed solely around maintaining the LAN ideologically rather than replacing it."
I'd disagree with that, at least insofar as ZeroTier is concerned. It emulates a LAN because it's convenient to do so: everything just works and software can just speak TCP/IP (or any other protocol). But if anything the goal is to embrace the post-LAN world and evolve away from the LAN model. Making LANs work like Slack channels is a step in this direction.
I really like what you wrote above and some of it is exactly what I was thinking when I first started working on ZeroTier years ago.
ZT solves multiple problems: (1) a better p2p VPN/SDN, (2) mobility and stable mobile addressing, (3) providing (1) and (2) everywhere including on vast numbers of WiFi, carrier, and legacy networks that do not permit open bi-directional access to the Internet. Internally we view the existing Internet/Intranet deployment topology with its NAT gateways and such as "the enemy." NAT in particular is the enemy and "break NAT" is an internal development mantra.
An analogy would be RAID, which seeks to achieve reliability using arrays of unreliable disks. In our case we want to achieve a flat reliable global network by running on top of an inconsistent, half-broken, gated, NATed spaghetti mess.
IPv6 should have done these things but didn't, and probably won't unless IPv6 mobility becomes a real thing and we can convince millions upon millions of IT admins to drop the concept of the local firewall. Even if IPv6 eventually gets there, we'll probably be waiting until the 2030s. In case it does, ZT was designed with migration paths in mind. Hint: a 64-bit network ID plus a 40-bit device ID is 104 bits, which fits inside a 128-bit IPv6 address.
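As a sketch of what such a migration path could look like, here's one way to pack those 104 bits into an RFC 4193-style private IPv6 address. The exact layout and padding bytes here are illustrative, not a spec:

```cpp
#include <cstdint>
#include <cstdio>

// One illustrative packing of a 64-bit network ID and a 40-bit device ID
// into a 128-bit RFC 4193-style private IPv6 address. Layout is hypothetical.
static void makeV6(uint64_t nwid, uint64_t deviceId, uint8_t addr[16]) {
    addr[0] = 0xfd;                                    // RFC 4193 ULA prefix (8 bits)
    for (int i = 0; i < 8; ++i)                        // 64-bit network ID
        addr[1 + i] = (uint8_t)(nwid >> (56 - 8 * i));
    addr[9] = 0x99;                                    // 16 bits of padding
    addr[10] = 0x93;                                   // (8 + 64 + 16 + 40 = 128)
    for (int i = 0; i < 5; ++i)                        // 40-bit device ID
        addr[11 + i] = (uint8_t)(deviceId >> (32 - 8 * i));
}

int main() {
    uint8_t a[16];
    makeV6(0x1122334455667788ULL, 0x99aabbccddULL, a); // made-up IDs
    for (int i = 0; i < 16; i += 2)
        printf("%02x%02x%c", a[i], a[i + 1], (i < 14) ? ':' : '\n');
    return 0;
}
```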
Our long term target is not AD or other LAN-centric ways of doing things, which is why we haven't built deeply into AD the way Pertino has. Our long term target is Internet of Things, mobile, and apps. If you pull the ZT source you can see this: the ZT network virtualization core is completely independent of any OS-dependent code and is designed to (eventually, with a little more work) build on embedded devices.
@Dashrender "How do you propose keeping the baddies that are trying to attack you over the web? I understand pull vectors, but what about the push ones?"
Local firewalls aren't obsolete. They're a pretty good way to limit your surface area. But personally I just like to make sure I'm not running anything I don't need. Also make sure you are up to date on patches, etc.
But the bottom line is that 90% of baddies aren't attacking you over the web anymore. They're trying to phish, scam, sneak in malware, and get you to visit malicious URLs. They've moved "up the stack," abusing vectors like social media, Dropbox/Google Drive, e-mail, etc. This is partly a direct response to the firewall and partly because these types of attacks are a lot more effective.
Based on real-world experience, the only exception I'd grant to the above is web apps. There was a case where a vulnerable PHP web app was attacked. But that app was of course in the DMZ, so the firewall did nothing there either. It was supposed to be exposed! And most people don't run PHP web apps on desktops and mobile devices.
I suppose you could still ask: if we got rid of firewalls tomorrow (setting aside unpatched and obsolete OSes), would we again see an epidemic of remote attacks? I can't say for sure that we wouldn't, but I personally doubt it. You'd see remote attacks against old vulnerable junk, but newer patched systems would not fare too badly, and the exposure would probably help harden things further. Firewalls promote immune system atrophy.
Of course ZeroTier has private certificate-gated networks and that's what most people use. Those are similar to VPN endpoints in risk profile. You can still have your boundary. It's just software defined.
A bit beyond IT pragmatism, but I gave this presentation a while back about how firewalls contribute to Internet centralization, surveillance, and the monopolization of communication by closed silos like Facebook and Google: https://www.zerotier.com/misc/BorderNone2014-AdamIerymenko-DENY_ALL.pdf
The core argument I'm making there is that the firewall is a grandfathered-in hack to get around very, very bad endpoint security and the fact that IP has no built-in authentication semantics. It's also a fundamentally broken security model, since it uses a non-cryptographic identifier (IP:port) as a security credential. Non-cryptographic credentials are worthless.
In a later presentation I distilled the "Red Queen's Race" slides to a "law of protocol bloat": any protocol allowed through the firewall accumulates features until it encapsulates or duplicates all functionality of all protocols blocked by the firewall. Examples: SSH, HTTP. In the end you just end up running an inferior version of IP encapsulated within another protocol.
@wirestyle22 I was describing a guiding principle. Obviously not everything measures up to that, and firewalls are still needed in a lot of situations. I just consider them "legacy" and think that if you're designing or building something new, it's best to design it to be secure in itself rather than assuming your private network is always going to stay private. Never trust the network, especially if it might have light bulbs and cloud-connected printers on it.
I also think the firewall's obsolescence is a fact regardless of how I or anyone else might feel about it. IoT, BYOD, and the cloud are killing it, so it's best to plan for its death and prepare accordingly. I just happen to be in the camp quietly cheering for its demise, because I think it's a bad, ugly hack that breaks the functionality of networks, and endpoint-centric security is better.
Edit: this is good too: http://etherealmind.com/why-firewalls-wont-matter-in-a-few-years/
I basically agree with all of that.
@Dashrender Decentralization is not all or nothing. You can have a p2p network with a central database that it uses for persistence and missed connections.
If you want to go all-in on decentralization you can do that with a DHT and crypto, but it's more work and possibly less reliable or slower.
As far as the feds telling Skype to centralize: I personally doubt this and have always heard it was because they found p2p too hard to do on mobile. Another reason is that they were bought by Microsoft. The cost of centralization drops enormously if you already own data centers; it's an economy of scale. So once MS bought them, the economic incentive to decentralize was gone. Centralization is also a more standard way of doing things that more coders understand, and it does make some problems simpler.
@dafyre It shouldn't really have to proxy-ARP in theory. The ARPs should cross the bridge and "just work." I could see proxy ARP making things more reliable, though.
@dafyre Maybe proxy ARP is actually in the way.
@wrx7m We've considered looking into this, but (a) we don't use AD or Windows much at all, and (b) default gateway, while planned, is complex for us and is currently queued behind a few other, more IoT/P2P-focused efforts.
Default gateway is hard for ZT because it's p2p. Normal tunnel VPNs can offer a default gateway by simply exempting traffic to their one upstream endpoint from the tunnel route, but ZT has to exempt its traffic to N random endpoints that are constantly changing. There are ways to do this by binding in the right way to the right interface, etc., but it involves OS-specific hacking and some refactoring. It can be done but hasn't been done yet.
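To give a flavor of the OS-specific part: on Linux, one approach is to pin the physical UDP socket to the underlying interface so the tunnel's own packets never follow the virtual default route. This is a sketch of the general technique, not our actual code:

```cpp
#include <cstdint>
#include <cstring>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Open a UDP socket pinned to the physical interface (e.g. "eth0") so its
// packets follow that interface's routes even when the system default route
// points into the virtual network. Linux-only; SO_BINDTODEVICE needs root.
static int openPhysicalSocket(const char *physIface, uint16_t port) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return -1;
    if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
                   physIface, strlen(physIface)) < 0) {
        close(s);
        return -1;
    }
    sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = INADDR_ANY;
    sin.sin_port = htons(port);
    if (bind(s, (sockaddr *)&sin, sizeof(sin)) < 0) {
        close(s);
        return -1;
    }
    return s;
}
```

Every OS needs its own variant of this trick, which is a big part of why it hasn't shipped yet.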
As far as AD goes, our impression for a while has been that everything's moving to Microsoft's cloud AD service. As a result we find heroics to support legacy AD of debatable utility. It's something we plan to investigate once we have a bit more resources (hopefully soon), but for now the bulk of the paying customer attention we've received is from people who want p2p network overlays for IoT and distributed systems applications. Those customers don't care about either of these features, but they do care a lot about reliability, monitoring, uptime, etc.
@dafyre I'll take a look, but in my experience bridging is always confusing to set up whenever there's a boundary between how things like IPs are allocated on each side. One of the things on our to-do list is to ship a preconfigured Raspberry Pi config or image that does bridging easily.
Yeah, those features are new. The first is monitoring -- it'll e-mail and SMS you (if you set an SMS number) when a monitored device goes offline. Other new features are in development, including things like exit gateway as a service, port/web inbound forwards into your network, etc.
Our thought is to build more value into the upgraded plans over time.
One wrinkle you should be aware of (this needs to be fixed in our web UI!): if you change the ZT-managed range from /24 to /23 or /22, you will also need to change it on all the devices. We should add a feature to renumber automatically, since right now it's tedious.
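A quick illustration of why: an address that's valid under the new /23 can look off-network to a device still holding the old /24 mask (hypothetical snippet, addresses are just examples):

```cpp
#include <cstdint>
#include <cstdio>

// Is IPv4 address 'ip' inside 'net'/'prefixLen'? Shows why a /24 -> /23
// change must be mirrored on every device: a peer at 10.147.16.25 is inside
// the new 10.147.16.0/23 but looks off-network to a device still configured
// with the old 10.147.17.0/24 mask.
static bool inPrefix(uint32_t ip, uint32_t net, int prefixLen) {
    const uint32_t mask =
        (prefixLen == 0) ? 0 : (0xffffffffU << (32 - prefixLen));
    return (ip & mask) == (net & mask);
}

int main() {
    const uint32_t ip    = (10u << 24) | (147u << 16) | (16u << 8) | 25; // 10.147.16.25
    const uint32_t new23 = (10u << 24) | (147u << 16) | (16u << 8);      // 10.147.16.0
    const uint32_t old24 = (10u << 24) | (147u << 16) | (17u << 8);      // 10.147.17.0
    printf("in new /23: %d, in stale /24: %d\n",
           inPrefix(ip, new23, 23), inPrefix(ip, old24, 24));
    return 0;
}
```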
DNS is fundamentally not designed for concurrent use on more than one network.
Try it again -- I think I found an issue. It should complete the URL with $releasever but was not doing so in some cases.