Where To Get P2PNode and its Dependencies
P2PNode requires the following:
- An IPOP binary release or the Brunet Sources, see IPOP#Software:_Current_Release
- On Linux, Mono, see How_to_install_gcc_and_mono_on_Linux
If you chose a source bundle, run nant in the base directory of brunet to compile the entire project, including P2PNode.
P2PNode provides a Brunet-based P2P infrastructure which is currently used by Brunet's Dht, IPOP, and other Brunet services. As with all P2P systems, P2PNode, and hence Brunet, requires a minimum set of nodes; it is therefore strongly recommended that an overlay contain at least 8 nodes. Nodes can run on the same machine or on machines in different parts of the world. To assist in small bootstrap environments, P2PNode allows multiple nodes to run in the same process, similar to Chord's virtual nodes.
Taking a Test Run
Deploying a System
Two use cases:
- If, for example, a user wanted to provide a service for users behind two different NATs to communicate, they would need at least one P2PNode on a public address; the P2PNodes running behind the NATs could then communicate either through NAT traversal or through tunneling.
- If a user wanted to deploy the system in a pre-existing LAN environment, they could deploy one P2PNode with 8 virtual nodes executing on any machine. This case uses discovery, which is discussed below.
The important thing to remember is that all nodes must have some way to discover each other. The simplest case is to have at least one "publicly" available node and include it in the configuration.
The configuration information for P2PNode is provided by the NodeConfig. Let's begin by reviewing the information there with respect to setting up the first case mentioned above. Below is what a sample NodeConfig XML file might look like. This is a configuration file that all the machines in the realm could share.
<NodeConfig>
  <BrunetNamespace>GenericRealm</BrunetNamespace>
  <RemoteTAs>
    <Transport>brunet.udp://22.214.171.124:12342</Transport>
  </RemoteTAs>
  <EdgeListeners>
    <EdgeListener type="udp">
      <port>12342</port>
    </EdgeListener>
  </EdgeListeners>
  <XmlRpcManager>
    <Enabled>true</Enabled>
    <Port>10000</Port>
  </XmlRpcManager>
</NodeConfig>
Let's look at this line by line...
The first step is selecting a BrunetNamespace. Only nodes that share a BrunetNamespace are able to form a P2P system together. This should be unique to each pool you deploy.
RemoteTAs is a list of well-known end points where you know an instance of P2PNode is running. In the simplest case, you may have only one publicly available node, but as the system grows you may want to increase the number of active public nodes. The format of the string is a common URI; the important parts are "udp", which specifies the transport-layer logic type, and the IP address and port of the public node where P2PNode is running. There are other choices here as well, such as Discovery, which makes this section optional.
<RemoteTAs>
  <Transport>brunet.udp://126.96.36.199:12342</Transport>
</RemoteTAs>
EdgeListeners lists our local end points. The port is optional, but for simplicity we will pick a specific port to run on; by doing this, we can have one generic configuration file for all nodes to share. Also note that if a port is already taken, P2PNode will attempt to use a random port instead. The type of transport logic is listed per EdgeListener, in this case UDP. We strongly encourage the use of UDP, as each TCP connection requires an additional socket and can be costly on loaded machines.
<EdgeListeners>
  <EdgeListener type="udp">
    <port>12342</port>
  </EdgeListener>
</EdgeListeners>
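The random-port fallback described above can be sketched in Python: try to bind the preferred UDP port, and fall back to an OS-assigned port if it is already taken. This is a hypothetical illustration of the behavior, not P2PNode's actual C# implementation.

```python
import socket

def bind_udp(preferred_port):
    """Bind a UDP socket on preferred_port, falling back to a random
    OS-assigned port (port 0) if the preferred one is already taken."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("", preferred_port))
    except OSError:
        # Port already in use: let the OS pick a free ephemeral port,
        # mirroring P2PNode's fallback behavior.
        sock.bind(("", 0))
    return sock

first = bind_udp(12342)
second = bind_udp(12342)  # 12342 is now taken, so this one gets a random port
print(first.getsockname()[1], second.getsockname()[1])
```

This is why one generic configuration file can be shared by several nodes on one machine: only the first node gets the configured port, and the rest still come up.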
The final chunk of the NodeConfig enables the XmlRpc services, including the Dht, provided by P2PNode. This requires specifying a port and setting Enabled to true. P2PNode and Brunet do not monitor where requests come from, so if you enable these services, please take care to protect yourself and your P2P system with a firewall if necessary. Note that even if you do not enable XmlRpc, the node will still function as a Dht provider and handle Rpc calls over Brunet; it simply will not handle XmlRpc requests directly.
<XmlRpcManager>
  <Enabled>true</Enabled>
  <Port>10000</Port>
</XmlRpcManager>
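When deploying to many machines, the NodeConfig shown above could also be generated programmatically. A minimal sketch using Python's standard xml.etree, with the element names taken from the sample configuration (the namespace, address, and ports here are just placeholders):

```python
import xml.etree.ElementTree as ET

def make_node_config(namespace, remote_ta, udp_port, xmlrpc_port):
    """Build a NodeConfig document matching the sample configuration."""
    root = ET.Element("NodeConfig")
    ET.SubElement(root, "BrunetNamespace").text = namespace
    tas = ET.SubElement(root, "RemoteTAs")
    ET.SubElement(tas, "Transport").text = remote_ta
    listeners = ET.SubElement(root, "EdgeListeners")
    listener = ET.SubElement(listeners, "EdgeListener", type="udp")
    ET.SubElement(listener, "port").text = str(udp_port)
    rpc = ET.SubElement(root, "XmlRpcManager")
    ET.SubElement(rpc, "Enabled").text = "true"
    ET.SubElement(rpc, "Port").text = str(xmlrpc_port)
    return ET.tostring(root, encoding="unicode")

xml_text = make_node_config("GenericRealm",
                            "brunet.udp://22.214.171.124:12342",
                            12342, 10000)
print(xml_text)
```

Writing the returned string to a file on each machine gives every node the same shared configuration.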
By enabling XmlRpc, users will be able to access the Brunet Dht and Rpc systems via userland tools. A particularly popular way of accessing them is through Python. The following sample code asks the local Brunet node for its Information.
#!/usr/bin/python
import xmlrpclib, sys
server = xmlrpclib.Server("http://127.0.0.1:10000/xm.rem")
print server.localproxy("Information.Info")
This sample code asks a specific Brunet node for its Information. The 3 indicates that we want to talk only to that exact address, and the 1 states that we expect only one result. For more information, please see ...
#!/usr/bin/python
import xmlrpclib, sys
server = xmlrpclib.Server("http://127.0.0.1:10000/xm.rem")
print server.proxy("brunet:node:XGPPYRFCACGGF3PZWSCDUCK6LGXJGK4M", 3, 1, "Information.Info")
To access the Dht, two helper scripts have been written: bput.py for putting information into the Dht and bget.py for getting information from it. The internals of these scripts will have to be tweaked to make sure the port maps to the DhtRpc port. The default port is 64221. Helpful usage information will be printed to the screen if you run them with no input.
bget.py usage:
bget.py [--output=<filename to write value to>] [--quiet] <key>
bput.py usage:
bput.py [--ttl=<time in sec>] [--input=<filename, - for stdin>] <key> [<value>]
bget.py and bput.py are available in the scripts directory of both the source code and the binary release.
P2PNode does not inherently provide a method to determine the health of a pool. www.grid-appliance.org uses a crawler to determine the state of the publicly available pools; the crawler is provided below. The crawler uses consistency to determine the health of a pool. A consistency of 1.0 is considered perfect, and Dht operations are known to work at consistencies greater than 0.95 (and perhaps lower). Consistency is measured as a node agreeing with both its two left and two right neighbors about their positions in the ring.
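As an illustration of the consistency measure, here is a simplified Python sketch: each node reports its two left and two right neighbors, and the score is the fraction of neighbor claims that the named neighbor reciprocates. This is an idealized model of the metric, not the actual logic in crawl.py, which works from live node reports.

```python
def pool_consistency(neighbors):
    """neighbors maps each node to its claimed [left2, left1, right1, right2]
    list. A claim is consistent if the claimed neighbor, in turn, lists the
    claiming node among its own neighbors. Returns the fraction of
    consistent claims; 1.0 is a perfectly consistent ring."""
    agreed = total = 0
    for node, claimed in neighbors.items():
        for other in claimed:
            total += 1
            if node in neighbors.get(other, []):
                agreed += 1
    return agreed / float(total) if total else 0.0

# A healthy 4-node ring 0-1-2-3-0, each node listing two neighbors per side.
ring = {
    0: [2, 3, 1, 2],
    1: [3, 0, 2, 3],
    2: [0, 1, 3, 0],
    3: [1, 2, 0, 1],
}
print(pool_consistency(ring))  # a fully consistent ring scores 1.0
```

Nodes that disagree about their neighbors (for example, during churn or a ring partition) lower the score, which is why values below roughly 0.95 suggest Dht operations may start failing.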
crawl.py is available in the scripts directory of both the source code and the binary release.
Using Brunet Discovery
If Brunet nodes are on the same layer-2 network, they will be able to discover each other through the LocalConnectionOverlord, which uses IPHandler. The advantage of using this is that you will not have to specify any well-known remote end points. The only requirement is that the nodes all share the same BrunetNamespace.
Discovery is not yet complete: if two independent networks form and later become able to multicast to each other, they will not merge into one large pool. This is a rare situation but deserves attention.
Setting Up Your Own Pool in Minutes
By using Discovery, one can have a full pool in minutes. The config directory contains a file called local.config. By using it as the configuration file, you can create a quick pool. To get a working system running on .NET, one would run the following command (the mono prefix is needed on Linux):
[mono] P2PNode.exe -nlocal.config -c20
After 30 seconds or so, connections should start forming; this can be confirmed by using the crawler.