__builtin__.object
    Cluster
    Experiment
    NodeWrapper
    Worker
TunHelper
class Cluster(__builtin__.object)
Class used to manage a cluster of Workers.
Manage a set of Workers via this class. A cluster can run one
Experiment at a time. If you've got several Experiments to run do
not destroy/recreate this class but define several Experiment
instances and run them sequentially.
Attributes:
config: Instance of Tools.Config to query MaxiNet configuration.
frontend: Instance of MaxiNet.Frontend.client.Frontend which
is used to manage the pyro Server.
hostname_to_worker: dictionary which translates hostnames into Worker
instances
hosts: List of worker hostnames.
ident: random integer which identifies this cluster instance on the
FrontendServer.
localIP: IPv4 address of the Frontend.
logger: Logging instance.
nameserver: pyro nameserver
nsport: Nameserver port number.
manager: MaxiNetManager instance hosted by FrontendServer which manages
Workers.
sshtool: SSH_Tool instance which is used to manage ssh client on frontend
machine.
tunhelper: Instance of TunHelper to enumerate tunnel instances.
worker: List of worker instances. Index of worker instance in
sequence must be equal to worker id.
Methods defined here:
- __init__(self, ip=None, port=None, password=None, minWorkers=None, maxWorkers=None)
- Inits Cluster class.
Args:
ip: IP address of FrontendServer nameserver.
port: port of FrontendServer nameserver.
password: password of FrontendServer nameserver.
minWorkers: minimum number of workers to allocate to this cluster; None for "at least 1".
maxWorkers: maximum number of workers to allocate to this cluster; None for "all you can get".
- add_worker(self)
- Add worker
Reserves a Worker for this Cluster on the FrontendServer and adds it to
the Cluster instance. Fails if no unreserved Worker is available on the
FrontendServer.
Returns:
True if worker was successfully added, False if not.
- add_worker_by_hostname(self, hostname)
- Add worker by hostname
Reserves a Worker for this Cluster on the FrontendServer and adds it to
the Cluster instance. Fails if Worker is reserved by other Cluster or
no worker with that hostname exists.
Args:
hostname: hostname of Worker
Returns:
True if worker was successfully added, False if not.
- add_workers(self)
- Add all available workers
Reserves all unreserved Workers for this Cluster on the FrontendServer
and adds them to the Cluster instance.
Returns:
Number of workers added.
- create_tunnel(self, w1, w2)
- Create GRE tunnel between workers.
Create GRE tunnel connecting worker machines w1 and w2 and return
the name of the created network interface. Queries the TunHelper
instance to create the tunnel name.
Args:
w1: Worker instance.
w2: Worker instance.
Returns:
Network interface name of created tunnel.
- get_available_workers(self)
- Get list of worker hostnames which are not reserved.
Returns:
list of hostnames of workers which are registered on the FrontendServer
but not reserved by this or another Cluster instance.
- get_status_is_alive(self)
Get the status of this Cluster object.
Returns True if the object is still alive.
This function is periodically called by the FrontendServer to check
whether the cluster still exists; otherwise its allocated resources
(Workers) are freed for future use by other clusters.
- get_worker(self, hostname)
- Return worker instance of worker with hostname hostname.
Args:
hostname: worker hostname
Returns:
Worker instance
- num_workers(self)
- Return number of worker nodes in this Cluster.
- remove_all_tunnels(self)
- Shut down all tunnels on all workers.
- remove_worker(self, worker)
- Remove worker from Cluster
Removes a Worker from the Cluster and makes it available for other
Cluster instances on the FrontendServer.
Args:
worker: hostname or Worker instance of Worker to remove.
- remove_workers(self)
- Remove all workers from this cluster
Removes all Workers from the Cluster and makes them available for other
Cluster instances on the FrontendServer.
- workers(self)
- Return sequence of worker instances for this cluster.
Returns:
Sequence of worker instances.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
class Experiment(__builtin__.object)
|
Class to manage MaxiNet Experiment.
Use this class to specify an experiment. Experiments are created for
one-time-usage and have to be stopped in the end. One cluster
instance can run several experiments in sequence.
Attributes:
cluster: Cluster instance which will be used by this Experiment.
config: Config instance to query the config file.
controller: Controller class to use in Experiment.
hostname_to_workerid: Dict to map hostnames of workers to workerids
hosts: List of host NodeWrapper instances.
isMonitoring: True if monitoring is in use.
logger: Logging instance.
nodemapping: optional dict to map nodes to specific worker ids.
nodes: List of NodeWrapper instances.
node_to_worker: Dict to map node name (string) to worker instance.
node_to_wrapper: Dict to map node name (string) to NodeWrapper
instance.
origtopology: Unpartitioned topology if topology was partitioned
by MaxiNet.
shares: list to map worker ids to workload shares. shares[x] is used to
obtain the share of worker id x.
starttime: Time at which Experiment was instantiated. Used for
logfile creation.
switch: Default mininet switch class to use.
switches: List of switch NodeWrapper instances.
topology: instance of MaxiNet.Frontend.partitioner.Clustering
tunnellookup: Dict to map tunnel tuples (switchname1,switchname2)
to tunnel names. Order of switchnames can be ignored as both
directions are covered.
workerid_to_hostname: dict to map worker ids to hostnames of
workers.
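The order-insensitive behaviour described for tunnellookup can be illustrated with frozenset keys. This is a hypothetical sketch of the idea, not MaxiNet's actual implementation:

```python
# Hypothetical sketch of an order-insensitive tunnel lookup table;
# MaxiNet's real tunnellookup may be implemented differently.
tunnellookup = {}

def register_tunnel(switch1, switch2, tunnelname):
    # frozenset keys make (s1, s2) and (s2, s1) equivalent,
    # so both lookup directions resolve to the same tunnel.
    tunnellookup[frozenset((switch1, switch2))] = tunnelname

def lookup_tunnel(switch1, switch2):
    return tunnellookup.get(frozenset((switch1, switch2)))
```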
Methods defined here:
- CLI(self, plocals, pglobals)
- Open interactive command line interface.
Arguments are used to allow usage of python commands in the same
scope as the one where CLI was called.
Args:
plocals: Dictionary as returned by locals()
pglobals: Dictionary as returned by globals()
- __init__(self, cluster, topology, controller=None, is_partitioned=False, switch=<class 'mininet.node.UserSwitch'>, nodemapping=None, hostnamemapping=None, sharemapping=None)
- Inits Experiment.
Args:
cluster: Cluster instance.
topology: mininet.topo.Topo (is_partitioned==False) or
MaxiNet.Frontend.partitioner.Clustering
(is_partitioned==True) instance.
controller: Optional IPv4 address of OpenFlow controller.
If not set controller IP from MaxiNet configuration will
be used.
is_partitioned: Optional flag to indicate whether topology
is already partitioned or not. Default is unpartitioned.
switch: Optional Switch class to use in Experiment. Default
is mininet.node.UserSwitch.
nodemapping: Optional dict to map nodes to specific worker
ids (nodename->workerid). If given needs to hold worker
ids for every node in topology.
hostnamemapping: Optional dict to map workers by hostname to
worker ids. If provided every worker hostname has to be mapped
to exactly one id. If the cluster consists of N workers valid ids
are 0 to N-1.
sharemapping: Optional list to map worker ids to workload shares.
sharemapping[x] is used to obtain the share of worker id x. Takes
precedence over shares configured in config file. If given needs
to hold share for every worker.
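As an illustration of how a sharemapping weights workers, a hypothetical helper (not part of MaxiNet) could distribute topology nodes proportionally to the shares:

```python
def nodes_per_worker(total_nodes, shares):
    # Hypothetical helper: split total_nodes proportionally to the
    # per-worker shares (shares[x] is the share of worker id x);
    # any rounding remainder goes to worker 0.
    total = sum(shares)
    counts = [total_nodes * s // total for s in shares]
    counts[0] += total_nodes - sum(counts)
    return counts
```

With sharemapping=[2, 1, 1], worker 0 would receive roughly half of the emulated nodes.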
- addController(self, name='c0', controller=None, wid=None, pos=None, **params)
- Add controller at runtime.
Use wid to specify a worker id or pos to specify the worker of an
existing node. If neither is given a random worker is chosen.
Args:
name: Controller name.
controller: Optional mininet class to use for instantiation.
wid: Optional worker id to place node.
pos: Optional existing node name whose worker should be used
as host of node.
**params: Parameters to use at mininet controller class
instantiation.
- addHost(self, name, cls=None, wid=None, pos=None, **params)
- Add host at runtime.
Use wid to specify a worker id or pos to specify the worker of an
existing node. If neither is given a random worker is chosen.
Args:
name: Host name.
cls: Optional mininet class to use for instantiation.
wid: Optional worker id to place node.
pos: Optional existing node name whose worker should be used
as host of node.
**params: Parameters to use at mininet host class
instantiation.
- addLink(self, node1, node2, port1=None, port2=None, cls=None, autoconf=False, **params)
- Add link at runtime.
Add link at runtime and create tunnels between workers if
necessary. Will not work for mininet.node.UserSwitch switches.
Be aware that tunnels will only work between switches so if you
want to create a link using a host at one side make sure that
both nodes are located on the same worker.
autoconf parameter handles attach() and config calls on switches and
hosts.
Args:
node1: Node name or NodeWrapper instance.
node2: Node name or NodeWrapper instance.
port1: Optional port number of link on node1.
port2: Optional port number of link on node2.
cls: Optional class to use on Link creation. Be aware that
only mininet.link.Link and mininet.link.TCLink are
supported for tunnels.
autoconf: mininet requires some calls to make newly added
tunnels work. If autoconf is set to True MaxiNet will
issue these calls automatically.
Raises:
RuntimeError: If cls is neither None, Link nor TCLink and
tunneling is needed.
- addNode(self, name, wid=None, pos=None)
- Do bookkeeping to add a node at runtime.
Use wid to specify a worker id or pos to specify the worker of an
existing node. If neither is given a random worker is chosen.
This does NOT actually create a Node object on the mininet
instance but is a helper function for addHost etc.
Args:
name: Node name.
wid: Optional worker id to place node.
pos: Optional existing node name whose worker should be used
as host of node.
- addSwitch(self, name, cls=None, wid=None, pos=None, **params)
- Add switch at runtime.
Use wid to specify a worker id or pos to specify the worker of an
existing node. If neither is given a random worker is chosen.
Args:
name: Switch name.
cls: Optional mininet class to use for instantiation.
wid: Optional worker id to place node.
pos: Optional existing node name whose worker should be used
as host of node.
**params: Parameters to use at mininet switch class
instantiation.
- configLinkStatus(self, src, dst, status)
- Change status of link.
Change status (up/down) of link between two nodes.
Args:
src: Node name or NodeWrapper instance.
dst: Node name or NodeWrapper instance.
status: String {up, down}.
- find_worker(*args, **kwargs)
- Get worker instance which emulates the specified node.
Replaced by get_worker.
Args:
node: nodename or NodeWrapper instance.
Returns:
Worker instance
- generate_hostname_mapping(self)
Generates a hostname -> workerid mapping dictionary.
- get(self, node)
- Return NodeWrapper instance that is specified by nodename.
Alias for get_node.
Args:
node: Nodename or nodewrapper instance.
Returns:
NodeWrapper instance with name nodename or None if none is
found.
- get_log_folder(self)
- Get folder to which log files will be saved.
Returns:
Logfile folder as String.
- get_node(self, node)
- Return NodeWrapper instance that is specified by nodename.
Args:
node: Nodename or nodewrapper instance.
Returns:
NodeWrapper instance with name nodename or None if none is
found.
- get_worker(self, node)
- Get worker instance which emulates the specified node
Args:
node: Nodename or NodeWrapper instance.
Returns:
Worker instance
- is_valid_hostname_mapping(self, d)
Checks whether a hostname -> workerid mapping is valid
(every worker has exactly one worker id; worker ids are contiguous
from 0 upwards).
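The validity rule described above (every worker mapped to exactly one id, ids contiguous from 0 upwards) can be sketched as follows. This is an illustrative re-implementation, not MaxiNet's code:

```python
def is_valid_hostname_mapping(mapping, hostnames):
    # mapping: hostname -> worker id. Valid when every hostname is
    # mapped exactly once and the ids are exactly 0..N-1.
    if set(mapping) != set(hostnames):
        return False
    return sorted(mapping.values()) == list(range(len(hostnames)))
```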
- log_cpu(self)
Log CPU usage of workers.
Places log files in /tmp/maxinet_logs/.
- log_cpu_of_worker(self, worker)
Log CPU usage of a worker.
Places log file in /tmp/maxinet_logs/.
- log_free_memory(self)
- Log memory usage of workers.
Places log files in /tmp/maxinet_logs.
Format is:
timestamp,FreeMemory,Buffers,Cached
- log_interface(self, worker, intf)
- Log statistics of interface of worker.
Places logs in /tmp/maxinet_logs.
Format is:
timestamp,received bytes,sent bytes,received packets,sent packets
- log_interfaces_of_node(self, node)
- Log statistics of interfaces of node.
Places logs in /tmp/maxinet_logs.
Format is:
timestamp,received bytes,sent bytes,received packets,sent packets
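The interface log format described above is plain comma-separated text, so a small helper can parse it. This is a hypothetical convenience function, not part of MaxiNet:

```python
import csv
import io

def parse_intf_log(text):
    # Parse lines in the format documented above:
    # timestamp,received bytes,sent bytes,received packets,sent packets
    fields = ("timestamp", "rx_bytes", "tx_bytes", "rx_pkts", "tx_pkts")
    rows = []
    for row in csv.reader(io.StringIO(text)):
        rows.append(dict(zip(fields, (float(v) for v in row))))
    return rows
```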
- monitor(self)
- Log statistics of worker interfaces and memory usage.
Places log files in /tmp/maxinet_logs.
- name(self, node)
- Get name of network node.
Args:
node: Node name or NodeWrapper instance.
Returns:
String of node name.
- run_cmd_on_host(*args, **kwargs)
- Run cmd on mininet host.
Run cmd on emulated host specified by host and return
output.
This function is deprecated and will be removed in a future
version of MaxiNet. Use Experiment.get(node).cmd() instead.
Args:
host: Hostname or NodeWrapper instance.
cmd: Command to run as String.
- setMTU(self, host, mtu)
- Set MTUs of all Interfaces of mininet host.
Args:
host: NodeWrapper instance.
mtu: MTU value.
- setup(self)
- Start experiment.
Partition topology (if needed), assign topology parts to workers and
start mininet instances on workers.
Raises:
RuntimeError: If Cluster is too small.
- stop(self)
- Stop experiment and shut down emulation on workers.
- terminate_logging(self)
- Stop logging.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
class NodeWrapper(__builtin__.object)
|
Wrapper that allows most commands that can be used in mininet to be
used in MaxiNet as well.
Whenever you call for example
> exp.get("h1")
you'll get an instance of NodeWrapper which will forward calls to
the respective mininet node.
Mininet method calls that SHOULD work:
"cleanup", "read", "readline", "write", "terminate",
"stop", "waitReadable", "sendCmd", "sendInt", "monitor",
"waitOutput", "cmd", "cmdPrint", "pexec", "newPort",
"addIntf", "defaultIntf", "intf", "connectionsTo",
"deleteIntfs", "setARP", "setIP", "IP", "MAC", "intfIsUp",
"config", "configDefault", "intfNames", "cgroupSet",
"cgroupGet", "cgroupDel", "chrt", "rtInfo", "cfsInfo",
"setCPUFrac", "setCPUs", "defaultDpid", "defaultIntf",
"connected", "setup", "dpctl", "start", "stop", "attach",
"detach", "controllerUUIDs", "checkListening"
Mininet attributes that SHOULD be queryable:
"name", "inNamespace", "params", "nameToIntf", "waiting"
Attributes:
nn: Node name as String.
worker: Worker instance on which node is hosted.
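The forwarding behaviour can be illustrated with a minimal stand-in class (in real MaxiNet the forwarded call travels over RPC to the hosting Worker):

```python
class NodeWrapperSketch(object):
    # Illustrative stand-in for NodeWrapper: unknown attribute
    # lookups are forwarded to the wrapped node via __getattr__.
    def __init__(self, node):
        self._node = node

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so
        # self._node itself is found without recursion.
        return getattr(self._node, name)

class DummyNode(object):
    # Hypothetical node object standing in for a mininet node.
    name = "h1"

    def cmd(self, line):
        return "ran: " + line
```

Wrapping a DummyNode then behaves like the wrapped node: `NodeWrapperSketch(DummyNode()).cmd("ifconfig")` forwards to DummyNode.cmd.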
Methods defined here:
- __getattr__(self, name)
- __init__(self, nodename, worker)
- Inits NodeWrapper.
The NodeWrapper does not create a node on the worker. For this
reason the node should already exist on the Worker when
NodeWrapper.__init__ gets called.
Args:
nodename: Node name as String
worker: Worker instance
- __repr__(self)
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
class TunHelper
|
Class to manage tunnel interface names.
This class is used by MaxiNet to make sure that tunnel interface
names are unique.
WARNING: This class is not designed for concurrent use!
Attributes:
tunnr: counter which increases with each tunnel.
keynr: counter which increases with each tunnel.
Methods defined here:
- __init__(self)
- Inits TunHelper
- get_key_nr(self)
- Get key number.
Returns a number to use when creating a new tunnel.
This number will only be returned once by this method.
(see get_last_key_nr)
Returns:
Number to use for key in tunnel creation.
- get_last_key_nr(self)
- Get last key number.
Returns the last number returned by get_key_nr.
Returns:
Number to use for key in tunnel creation.
- get_last_tun_nr(self)
- Get last tunnel number.
Returns the last number returned by get_tun_nr.
Returns:
Number to use for tunnel creation.
- get_tun_nr(self)
- Get tunnel number.
Returns a number to use when creating a new tunnel.
This number will only be returned once by this method.
(see get_last_tun_nr)
Returns:
Number to use for tunnel creation.
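The counter behaviour described above can be sketched as follows; this is a plausible re-implementation for illustration, and the real TunHelper may differ in detail (e.g. the starting value):

```python
class TunHelperSketch(object):
    # Illustrative sketch of TunHelper's counters: each get_* call
    # returns a fresh number exactly once, and get_last_* repeats
    # the most recently handed-out number.
    def __init__(self):
        self.tunnr = 0
        self.keynr = 0

    def get_tun_nr(self):
        self.tunnr += 1
        return self.tunnr - 1

    def get_last_tun_nr(self):
        return self.tunnr - 1

    def get_key_nr(self):
        self.keynr += 1
        return self.keynr - 1

    def get_last_key_nr(self):
        return self.keynr - 1
```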
class Worker(__builtin__.object)
|
Worker class used to manage an individual Worker host.
A Worker is part of a Cluster and runs a part of the emulated
network. A Worker is identified by its hostname.
The Worker class is instantiated when a Worker is added to a Cluster.
Attributes:
config: instance of class MaxiNetConfig
mininet: remote instance of class MininetManager which is used to
create and manage mininet on the Worker machine.
server: remote instance of class WorkerServer which is used to run
commands on the Worker machine.
switch: default mininet switch class to use in mininet instances.
ssh: instance of class SSH_Manager used to configure the ssh daemon
on the worker.
sshtool: instance of class SSH_Tool used to manage the ssh client on
the frontend machine.
Methods defined here:
- __init__(self, nameserver, pyroname, pyropw, sshtool, switch=<class 'mininet.node.UserSwitch'>)
- Init Worker class.
- addController(self, name='c0', controller=None, **params)
- Add controller at runtime.
You probably want to use Experiment.addController as this does
some bookkeeping on nodes etc.
Args:
name: controllername to add. Must not already exist on Worker.
controller: mininet controller class to use.
**params: Additional parameters for cls instantiation.
Returns:
controllername
- addHost(self, name, cls=None, **params)
- Add host at runtime.
You probably want to use Experiment.addHost as this does some
bookkeeping of nodes etc.
Args:
name: Nodename to add. Must not already exist on Worker.
cls: Node class to use.
**params: Additional parameters for cls instantiation.
Returns:
nodename
- addLink(self, node1, node2, port1=None, port2=None, cls=None, **params)
- Add link at runtime.
You probably want to use Experiment.addLink as this does some
bookkeeping.
Args:
node1: nodename
node2: nodename
port1: optional port number to use on node1.
port2: optional port number to use on node2.
cls: optional class to use when creating the link.
Returns:
Tuple of the following form: ((node1, intfname1),
(node2, intfname2)) where intfname1 and intfname2 are the
names of the interfaces which were created for the link.
- addSwitch(self, name, cls=None, **params)
- Add switch at runtime.
You probably want to use Experiment.addSwitch as this does some
bookkeeping on nodes etc.
Args:
name: switchname to add. Must not already exist on Worker.
cls: Node class to use.
**params: Additional parameters for cls instantiation.
Returns:
nodename
- addTunnel(self, name, switch, port, intf, **params)
- Add tunnel at runtime.
You probably want to use Experiment.addLink as this does some
bookkeeping on tunnels etc.
Args:
name: tunnelname (must be unique on Worker)
switch: name of switch to which tunnel will be connected.
port: port number to use on switch.
intf: Intf class to use when creating the tunnel.
- configLinkStatus(self, src, dst, status)
- Wrapper for configLinkStatus method on remote mininet.
Used to enable and disable links.
Args:
src: name of source node
dst: name of destination node
status: string {up|down}
- daemonize(self, cmd)
- Run command in background and terminate it when MaxiNet is shut
down.
- daemonize_script(self, script, args)
- Run script from script folder in background and terminate it when
MaxiNet is shut down.
Args:
script: Script name to call
args: string of args which will be appended to script name call
- get_file(self, src, dst)
- Transfer file specified by src on worker to dst on Frontend.
Transfers file src to filename or folder dst on Frontend machine
via scp.
Args:
src: string of path to file on Worker
dst: string of path to file or folder on Frontend
- hn(self)
- Get hostname of worker machine.
- ip(self, classifier=None)
- Get public IP address of worker machine.
Args:
classifier: if multiple ip addresses are configured for a worker
a classifier can be used to hint which ip address should be used.
- put_file(self, src, dst)
- Transfer file specified by src on Frontend to dst on worker.
Transfers file src to filename or folder dst on Worker machine
via scp.
Args:
src: string of path to file on Frontend
dst: string of path to file or folder on Worker
- rattr(self, host, name)
- Get attribute of mininet node.
MaxiNet uses this function to get attributes of remote nodes in
the NodeWrapper class.
Args:
host: Nodename
name: Attribute name
Returns:
Value of the requested attribute of the node.
WARNING: if the attribute is not serializable this might
crash.
- rpc(self, host, cmd, *params1, **params2)
- Do rpc call to mininet node.
MaxiNet uses this function to do rpc calls on remote nodes in
NodeWrapper class.
Args:
host: Nodename
cmd: Method of node to call.
*params1: Unnamed parameters to call.
**params2: Named parameters to call.
Returns:
Return of host.cmd(*params1,**params2).
WARNING: if returned object is not serializable this might
crash.
- run_cmd(self, cmd)
- Run cmd on worker machine and return output.
Args:
cmd: string of program name and arguments to call.
Returns:
Stdout of program call.
- run_cmd_on_host(self, host, cmd)
- Run cmd in context of host and return output.
Args:
host: nodename
cmd: string of program name and arguments to call.
Returns:
Stdout of program call.
- run_script(self, cmd)
- Run MaxiNet script on worker machine and return output.
Args:
cmd: String of name of MaxiNet script and arguments.
Returns:
Stdout of program call.
- set_switch(self, switch)
- Set default switch class.
- start(self, topo, tunnels, controller=None)
- Start mininet instance on worker machine.
Start mininet emulating the topology specified by the topo
argument. If controller is not specified mininet will start its
own controller for this net.
Args:
topo: Topology to emulate on this worker.
tunnels: List of tunnels in format: [[tunnelname, switch,
options],].
controller: optional mininet controller class to use in this
network.
- stop(self)
- Stop mininet instance on this worker.
- tunnelX11(self, node)
- Create X11 tunnel from Frontend to node on worker to make
X11 forwarding work.
This is used in the CLI class to allow calls to wireshark etc.
For each node only one tunnel will be created.
Args:
node: nodename
Returns:
boolean whether tunnel was successfully created.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)