In this assignment you will build a small peer-to-peer resource
distribution system. You will also get to practice remote procedure
calls in Go.
High-level description
The system has two kinds of nodes: a peer (that you
will implement) and a server (that we will implement). An instance of
the system will run some number of copies of your peer code, either as
several processes on a single machine, or as processes across several
machines. The peers need to collectively coordinate their interaction
with the server in order to (1) initiate a session, (2) retrieve a
finite set of resources, and (3) place these resources on the right
peers. Once this is completed, each peer will print out the resources
it is hosting and exit. The peer-to-server protocol is specified in
detail below. The peer-to-peer protocol is unspecified and is up to you
to design and implement.
The details
The server listens for connections from peers and expects two kinds of
RPC invocations: InitSession and GetResource. InitSession initiates a
session and returns a token called sessionID that must be used in calls
to GetResource. GetResource returns a string, which we term a resource,
and also returns the ID of the peer that must host the resource. Your
peer code must route each resource to the right peer, which will then
store it. More precisely, these RPCs look like this:
- sessionID ← InitSession(numPeers)
  Initiates a new session. Takes the number of peers as a positive
  integer argument and returns a positive integer sessionID to the
  caller.
- [resource, peerID, numRemaining] ← GetResource(sessionID)
  Returns a resource for a previously initialized session with ID
  sessionID. Returns three items:
  - A resource: an arbitrary string.
  - The ID of the peer that must host the resource. The peer ID is
    an integer between 1 and numPeers, inclusive.
  - The number of remaining resources (i.e., resources available
    through further invocations of GetResource). This number is
    greater than or equal to 0.
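To make the call shapes concrete, here is a minimal sketch of invoking
these two RPCs with Go's net/rpc package. The service name Server, the
argument encodings, and the reply struct below are illustrative
assumptions only; the starter code defines the actual RPC data
structures, and you should follow those.

package main

import (
	"fmt"
	"log"
	"net/rpc"
)

// GetResourceReply is an illustrative reply type; use the data
// structures defined in the starter code instead.
type GetResourceReply struct {
	Resource     string
	PeerID       int
	NumRemaining int
}

func main() {
	// Dial the server's RPC TCP address (in your peer, taken from
	// the command line rather than hard-coded).
	client, err := rpc.Dial("tcp", "127.0.0.1:12345")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// InitSession: invoked exactly once for the whole peer group.
	var sessionID int
	if err := client.Call("Server.InitSession", 3, &sessionID); err != nil {
		log.Fatal(err)
	}

	// GetResource: this peer makes one call; the next call must
	// come from a different peer.
	var reply GetResourceReply
	if err := client.Call("Server.GetResource", sessionID, &reply); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("resource=%q peerID=%d remaining=%d\n",
		reply.Resource, reply.PeerID, reply.NumRemaining)
}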
In interacting with the server, the peer group that executes your peer
code must satisfy three constraints:
- Constraint 1: The InitSession RPC must be invoked exactly once and
  must have the correct number of peers as an argument.
- Constraint 2: Two consecutive invocations of the GetResource RPC
  cannot come from the same peer.
- Constraint 3: The GetResource RPC must be invoked exactly 1 + numR
  times, where numR is the numRemaining value received from the first
  call to GetResource (i.e., once the server returns a numRemaining
  value of 0, the peer group must make no more calls to the server).
These are hard constraints: you must satisfy them to pass this
assignment.
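One scheme that satisfies all three constraints (by no means the only
one) is to circulate a turn token among the peers in round-robin order:
the holder makes a single GetResource call, records the new
numRemaining in the token, and passes the token on, so no peer ever
calls twice in a row and the calls stop exactly when numRemaining
reaches 0. Below is a sketch of that turn logic, intended as a fragment
of peer.go; recvToken, sendToken, callGetResource, and deliver are
hypothetical helpers you would build on top of your own peer protocol.

// Token carries the turn and session state between peers.
type Token struct {
	SessionID    int
	NumRemaining int  // numRemaining from the latest GetResource reply
	Done         bool // true once the server returned numRemaining == 0
	Origin       int  // peer that set Done, so the shutdown stops circulating
}

// Hypothetical helpers standing in for your peer protocol and the
// server RPCs.
func recvToken() Token                                 { panic("implement") }
func sendToken(peerID int, tok Token)                  { panic("implement") }
func callGetResource(sessionID int) (string, int, int) { panic("implement") }
func deliver(ownerID int, resource string)             { panic("implement") }

// takeTurns runs one peer's side of the round-robin. Peer 1 would
// create and send the first token after calling InitSession
// (Constraint 1).
func takeTurns(peerID, numPeers int) {
	for {
		tok := recvToken() // block until this peer holds the turn
		if !tok.Done {
			// Constraint 2 holds because each peer makes at most
			// one call before handing the turn to the next peer.
			resource, ownerID, numRemaining := callGetResource(tok.SessionID)
			deliver(ownerID, resource) // route to the peer that must host it
			tok.NumRemaining = numRemaining
			if numRemaining == 0 {
				tok.Done = true // Constraint 3: no further calls anywhere
				tok.Origin = peerID
			}
		}
		next := peerID%numPeers + 1 // peer IDs run from 1 to numPeers
		if !tok.Done || next != tok.Origin {
			sendToken(next, tok) // pass the turn, or propagate the shutdown
		}
		if tok.Done {
			return
		}
	}
}

Note that the shutdown token only bounds the GetResource calls: before
exiting, each peer must also make sure that every resource routed to it
has actually arrived and been printed, which is part of your protocol
design.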
In addition, your peer group must satisfy three requirements:
- Resource distribution requirement: for every pair
  [resource, peerID] returned by a call to GetResource, the peer with
  ID peerID must eventually store resource.
- Resource printing requirement: each resource stored by a peer must
  be printed to its stdout, followed by a newline.
- Termination requirement: after all of the resources have been
  retrieved from the server and have been printed by the peers that
  are supposed to store them, all peers must terminate.
You have two important degrees of design freedom in this assignment:
the protocol between peers, and the algorithm by which you satisfy the
above constraints and requirements. For example, your peer protocol
can build on UDP/TCP, on RPC, or on another abstraction.
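For instance, if you build the peer protocol on RPC, each peer can
expose a small RPC service of its own for accepting resources from
other peers. The following is a minimal, self-contained sketch; the
Peer service and its Deliver method are illustrative names, not a
required interface.

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

type Peer struct{}

// Deliver stores a resource on this peer; here it just prints it,
// which is what the resource printing requirement ultimately demands.
func (p *Peer) Deliver(resource string, ack *bool) error {
	fmt.Println(resource)
	*ack = true
	return nil
}

// servePeers starts accepting RPCs from other peers on addr.
func servePeers(addr string) {
	rpc.Register(new(Peer))
	l, err := net.Listen("tcp", addr)
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(l) // serve peer connections in the background
}

// deliverTo sends a resource to the peer listening at peerAddr.
func deliverTo(peerAddr, resource string) {
	client, err := rpc.Dial("tcp", peerAddr)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	var ack bool
	if err := client.Call("Peer.Deliver", resource, &ack); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Local smoke test: deliver one resource to ourselves.
	servePeers("127.0.0.1:9001")
	deliverTo("127.0.0.1:9001", "example-resource")
}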
You will test and debug your solution against a server that we will
give to you. Your peer must be run with four arguments: number of
peers in the group, the local peer ID, a file with IP:port pairs for
peers to use when contacting other peers, and the RPC TCP IP:port to
use when contacting the server.
Assumptions you can make
- The server is reachable, does not fail/restart, and does not
misbehave.
- No network failures and no peer failures.
- Each peer has a unique peer ID (specified on the command
line).
- All peer processes will be invoked within 2 seconds of each
other.
- There are no ordering constraints on when peers terminate
  relative to one another, or on when peers print the resources,
  including the peer-local ordering of the printed resources.
Assumptions you cannot make
- That nodes have synchronized clocks (e.g., that they run on the
  same physical host).
- That the network is perfectly reliable (e.g., if you use UDP for
  your peer protocol, expect loss and reordering).
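If you do build your peer protocol on UDP, you will need your own
acknowledgements and retransmissions. Below is a minimal stop-and-wait
sketch, intended as a fragment of peer.go (it needs the net and time
imports); it assumes the receiver replies to every message, and a
complete protocol would additionally tag messages with IDs so that
duplicates created by resends can be detected and dropped.

// sendReliably writes msg on a connected UDP socket and waits for a
// reply, retransmitting on timeout. The reply doubles as the ack.
func sendReliably(conn *net.UDPConn, msg []byte) ([]byte, error) {
	buf := make([]byte, 64*1024)
	for {
		if _, err := conn.Write(msg); err != nil {
			return nil, err
		}
		conn.SetReadDeadline(time.Now().Add(500 * time.Millisecond))
		n, err := conn.Read(buf)
		if err == nil {
			return buf[:n], nil
		}
		if nerr, ok := err.(net.Error); !ok || !nerr.Timeout() {
			return nil, err // a real error, not just a lost packet
		}
		// Timed out: assume the message (or its reply) was lost; resend.
	}
}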
Implementation requirements
- The client code must be runnable on CS ugrad machines and be
  compatible with Go version 1.6.3.
- Your solution may use only standard library Go packages.
- Your solution code must be formatted with gofmt.
Solution spec
Write a single Go program called peer.go that acts as a peer in the
protocol described above. Your program must implement the following
command-line usage:
go run peer.go [numPeers] [peerID] [peersFile] [server ip:port]
- [numPeers] : the number of peers in the group; an integer greater
  than or equal to 1.
- [peerID] : the identity of this peer; an integer between 1 and
  numPeers, inclusive.
- [peersFile] : a file with numPeers lines. Each line in this file
  has a unique IP:port address of a peer. The IP:port on line i
  should be used by the peer with peerID i.
- [server ip:port] : the TCP address on which the server receives
  new client RPC connections.
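The argument handling itself is mechanical; the following is a sketch
of parsing the four arguments and the peers file using only standard
library packages, per the implementation requirements.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
)

func main() {
	if len(os.Args) != 5 {
		fmt.Fprintln(os.Stderr, "usage: go run peer.go [numPeers] [peerID] [peersFile] [server ip:port]")
		os.Exit(1)
	}
	numPeers, err := strconv.Atoi(os.Args[1])
	if err != nil || numPeers < 1 {
		log.Fatal("numPeers must be an integer >= 1")
	}
	peerID, err := strconv.Atoi(os.Args[2])
	if err != nil || peerID < 1 || peerID > numPeers {
		log.Fatal("peerID must be an integer between 1 and numPeers")
	}
	serverAddr := os.Args[4]

	// Line i of peersFile holds the IP:port for the peer with peerID i.
	f, err := os.Open(os.Args[3])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	var peerAddrs []string // peerAddrs[i-1] is the address of peer i
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		peerAddrs = append(peerAddrs, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Println(numPeers, peerID, peerAddrs, serverAddr) // replace with your peer logic
}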
Starter code
Download the starter code. Please carefully read
and follow the RPC data structure comments at the top of the file.
Solution code
Download the solution.
Testing server
Download the server binary. The server binary
should be run on a CS ugrad server with one argument: the TCP IP:port
on which to listen for incoming RPC connections from peers.
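For example, to test locally with three peers you might run something
like the following in four separate terminals (the server binary name,
the port, and the file name here are placeholders):

./server 127.0.0.1:54321
go run peer.go 3 1 peers.txt 127.0.0.1:54321
go run peer.go 3 2 peers.txt 127.0.0.1:54321
go run peer.go 3 3 peers.txt 127.0.0.1:54321

where peers.txt contains three IP:port lines, line i giving the address
that peer i listens on for peer-to-peer traffic. (Recall the assumption
that all peer processes are started within 2 seconds of each other.)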
Rough grading rubric
- 20%: Solution satisfies Constraint 1.
- 30%: Solution satisfies Constraint 2.
- 10%: Solution satisfies Constraint 3.
- 25%: Solution satisfies the resource distribution requirement.
- 10%: Solution satisfies the resource printing requirement.
- 5%: Solution satisfies the termination requirement.
Make sure to follow the
course collaboration policy and refer
to the assignment instructions
that detail how to submit your solution.