Sunday, January 23, 2011

Tcl/Tk and Satellite Tracking System

SaVi allows you to simulate satellite orbits and coverage, in two and three dimensions. The SaVi project is particularly useful for simulating satellite constellations such as Teledesic and Iridium.

Requirements:
SaVi requires:

  • an ANSI C compiler, e.g. gcc from http://www.gnu.org/software/gcc/


  • tested and builds with gcc and egcs variants: gcc 2.95, 3.2 and 3.3.1.


  • Tcl and Tk, from http://www.tcl.tk/


  • most recently tested with Tcl/Tk 8.4; use of Tk color picker and load/save file dialogs demands a minimum of Tcl/Tk 7.6/4.2.


  • Tcl/Tk 8.x gives increased performance, and is recommended.


  • If an existing installation of Tcl does not include header files, e.g. /usr/include/tcl.h, you may be able to add these by installing the tcl-devel package.

    SaVi can optionally use:


  • the zlib compression library, from http://www.gzip.org/zlib/


  • most recently tested with zlib 1.2.1. To build with zlib to compress dynamic texturemaps that are sent to Geomview, remove the -DNO_ZLIB flag from src/Makefile.


  • the X Window system. SaVi's fisheye display requires X, but can be disabled by passing the -no-X command-line flag to SaVi. X libraries are required to compile SaVi.


  • Geomview, discussed below. Geomview requires an X Window installation.

    SaVi has been successfully compiled and run on the following machines and unix-like operating systems:


  • Intel x86 / Linux (Red Hat 6.x, 7.0, 7.2; Fedora Core 2; Mandrake 9.0)


  • Sun SPARC / Solaris (2.4 and later)


  • SGI / Irix5


  • Intel x86 / Cygwin (1.5.9-1 to .12-1. Use of SaVi's fisheye view currently requires an X display, so the fisheye view is automatically disabled under Cygwin for the non-X Insight Tcl that is supplied with Cygwin. Compiling Tcl/Tk for use of SaVi with Geomview together in an X display window is recommended. You may also need to edit tcl/Makefile to run a custom tclsh executable with older versions of Cygwin.)

    Installation

    For the remainder of this file, we shall refer to the directory originally containing this README file, the root of the SaVi tree, as $SAVI. That is, if you are a user and have unpacked SaVi in your home directory, then $SAVI would be the topmost SaVi directory ~user/saviX.Y.Z that contains this README file that you are reading.

    1.) In $SAVI/src/Makefile_defs.ARCH (where ARCH is linux, sun, irix, or cygwin), you may need to edit some variables to suit your system. If your system is up to date, with recent versions of Tcl and Tk installed and everything in its usual place, the generic defs file called "Makefile_defs." may work perfectly, and typing 'make' in SaVi's topmost directory may be sufficient to compile the C files in src/ and index the Tcl files in tcl/.

    If not, choose the Makefile_defs. file most suitable for your system and:

    - edit the variables that give the locations of the Tcl/Tk libraries and header include files.

    - edit the variables that point to the X11 libraries and include files.

    - set the CC variable to an ANSI C compiler, e.g. gcc

    2.) Return to the topmost SaVi directory $SAVI and type e.g. 'make ARCH=linux' (or sun, or irix, or cygwin). Typing just 'make' in the topmost directory will use the default Makefile_defs. file.

    3.) You may also need to edit the locations of the Tcl and Tk libraries in $SAVI/savi at the TCL_LIBRARY and TK_LIBRARY lines when linking dynamically.

    If running the savi script to launch SaVi generates Tcl or Tk errors, it is often because the TCL_LIBRARY or TK_LIBRARY lines need to be corrected in that shell wrapper, or because make was not run using the top-level Makefile in the $SAVI directory. SaVi needs $SAVI/tcl/tclIndex to run. That tcl/tclIndex file is generated by tcl/Makefile, which, like all the other subdirectory Makefiles, is called by the top-level master Makefile in the same directory as this README file.

    Using

    As in the previous section, we refer to the directory containing this README file as $SAVI.

    1.) To run SaVi standalone, without needing Geomview, in the $SAVI directory type:

    ./savi

    Or from any other directory,

    $SAVI/savi

    To load a satellite Tcl script file directly, type:

    ./savi filename

    SaVi supports a number of command-line switches, many related to use with Geomview. To see these, type:

    ./savi -help

    2.) To run SaVi as a module within Geomview, for 3D rendering, when in the $SAVI directory start up Geomview:

    geomview

    and then select "SaVi" from Geomview's scrollable list of external modules. Or invoke directly:

    geomview -run savi [flags] < script filename >

    Or from any directory where you can start Geomview, try

    geomview -run savi [flags] < script filename >

    You might invoke a saved one-line script, to pass parameters through to SaVi:

    geomview -run savi [always-on flags] $*

    3.) To make SaVi accessible to other users, you can copy the "savi" script in $SAVI to some directory in other users' search paths such as /usr/local/bin, so they needn't add SaVi's own directory to their own path. If you do, edit the "savi" script, inserting the full path name of $SAVI as indicated in the script itself:

    # If you copy this script from the SaVi installation and run it elsewhere,
    # then you should uncomment the following line:
    # SAVI=/usr/local/savi
    # and replace /usr/local/savi with the location of
    # your SaVi installation.

    You can also make SaVi accessible from Geomview's scrollable list of external modules. Assuming Geomview is installed in /usr/local/Geomview, say:

    cd /usr/local/Geomview/modules/sgi

    Create a file here called ".geomview-savi" containing e.g.:

    (emodule-define "SaVi" "/usr/local/savi1.2.6/savi")

    where the right-hand side is the absolute path name for the savi script.

    What's New in This Release:


  • Makes SaVi easier to build on Linux systems by adjusting the Linux makefile definitions. See the RELEASE-NOTE file.


    SaVi allows you to simulate satellite orbits and coverage, in two and three dimensions. SaVi is particularly useful for simulating satellite constellations such as Iridium and Globalstar. SaVi runs on Microsoft Windows (under Cygwin), on Macintosh OS X, Linux and Unix. To get started with SaVi, download SaVi 1.4.3 (from UK mirror).
    Read the SaVi user manual and learn about satellite constellations in a tutorial using SaVi. Then take a look at the optional but useful Geomview, which SaVi can use for 3D rendering.
    Further information on SaVi is available. SaVi is supported via the SaVi users mailing list.
    SaVi is developed at SourceForge. There is a SaVi developers mailing list.

    Tcl and Wireless

    Tcl script for implementing the DSR routing protocol in a wireless network

    Description:

    This network consists of 3 nodes. After creating the nam file and trace file, we set up the topography object. set node_($i) [$ns node] is used to create the nodes, and initial_node_pos sets the initial position of every node. $val(stop) is then used to tell the nodes when the simulation ends. The nodes have a TCP connection: a "tcp" agent is attached to node_(0), and a connection is established to a tcp "sink" agent attached to node_(1). By default, the maximum size of a packet that a "tcp" agent can generate is 1 KByte. A tcp "sink" agent generates and sends ACK packets to the sender (the tcp agent) and frees the received packets. The FTP application is set to start at 10.0 sec, and the simulation ends at 150.0 sec. Here we are using the DSR routing protocol.
    File name: “Dsr.tcl”
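    The listing below refers to a $val configuration array that is not defined in the listing itself. A minimal sketch of the kind of definitions it assumes is shown here; the exact values are typical ns-2 wireless defaults and are assumptions rather than part of the original script.

    # Assumed configuration options for Dsr.tcl (typical ns-2 defaults; adjust to suit)
    set val(chan)   Channel/WirelessChannel    ;# channel type
    set val(prop)   Propagation/TwoRayGround   ;# radio-propagation model
    set val(netif)  Phy/WirelessPhy            ;# network interface type
    set val(mac)    Mac/802_11                 ;# MAC type
    set val(ifq)    CMUPriQueue                ;# interface queue type (DSR uses CMUPriQueue)
    set val(ll)     LL                         ;# link layer type
    set val(ant)    Antenna/OmniAntenna        ;# antenna model
    set val(ifqlen) 50                         ;# max packets in the interface queue
    set val(nn)     3                          ;# number of mobile nodes
    set val(rp)     DSR                        ;# ad-hoc routing protocol
    set val(x)      500                        ;# X dimension of the topography
    set val(y)      500                        ;# Y dimension of the topography
    set val(stop)   150                        ;# time at which the simulation ends (s)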

    #-------Event scheduler object creation--------#
    set ns              [new Simulator]
    #Creating trace file and nam file
    set tracefd       [open dsr.tr w]
    set windowVsTime2 [open win.tr w]
    set namtrace      [open dsr.nam w]   
    $ns trace-all $tracefd
    $ns namtrace-all-wireless $namtrace $val(x) $val(y)
    # set up topography object
    set topo       [new Topography]
    $topo load_flatgrid $val(x) $val(y)
    create-god $val(nn)
    # configure the nodes
            $ns node-config -adhocRouting $val(rp) \
                       -llType $val(ll) \
                       -macType $val(mac) \
                       -ifqType $val(ifq) \
                       -ifqLen $val(ifqlen) \
                       -antType $val(ant) \
                       -propType $val(prop) \
                       -phyType $val(netif) \
                       -channelType $val(chan) \
                       -topoInstance $topo \
                       -agentTrace ON \
                       -routerTrace ON \
                       -macTrace OFF \
                       -movementTrace ON
                     
          for {set i 0} {$i < $val(nn) } { incr i } {
                set node_($i) [$ns node]     
          }
    # Provide initial location of mobilenodes
    $node_(0) set X_ 5.0
    $node_(0) set Y_ 5.0
    $node_(0) set Z_ 0.0
    $node_(1) set X_ 490.0
    $node_(1) set Y_ 285.0
    $node_(1) set Z_ 0.0
    $node_(2) set X_ 150.0
    $node_(2) set Y_ 240.0
    $node_(2) set Z_ 0.0
    # Generation of movements
    $ns at 10.0 "$node_(0) setdest 250.0 250.0 3.0"
    $ns at 15.0 "$node_(1) setdest 45.0 285.0 5.0"
    $ns at 110.0 "$node_(0) setdest 480.0 300.0 5.0"
    # Set a TCP connection between node_(0) and node_(1)
    set tcp [new Agent/TCP/Newreno]
    $tcp set class_ 2
    set sink [new Agent/TCPSink]
    $ns attach-agent $node_(0) $tcp
    $ns attach-agent $node_(1) $sink
    $ns connect $tcp $sink
    set ftp [new Application/FTP]
    $ftp attach-agent $tcp
    $ns at 10.0 "$ftp start"
    # Printing the window size
    proc plotWindow {tcpSource file} {
    global ns
    set time 0.01
    set now [$ns now]
    set cwnd [$tcpSource set cwnd_]
    puts $file "$now $cwnd"
    $ns at [expr $now+$time] "plotWindow $tcpSource $file" }
    $ns at 10.1 "plotWindow $tcp $windowVsTime2" 
    # Define node initial position in nam
    for {set i 0} {$i < $val(nn)} { incr i } {
    # 30 defines the node size for nam
    $ns initial_node_pos $node_($i) 30
    }
    # Telling nodes when the simulation ends
    for {set i 0} {$i < $val(nn) } { incr i } {
        $ns at $val(stop) "$node_($i) reset";
    }
    # ending nam and the simulation
    $ns at $val(stop) "$ns nam-end-wireless $val(stop)"
    $ns at $val(stop) "stop"
    $ns at 150.01 "puts \"end simulation\" ; $ns halt"
    proc stop {} {
        global ns tracefd namtrace
        $ns flush-trace
        close $tracefd
        close $namtrace
    exec nam dsr.nam &
    exit 0
    }
    $ns run
    # How to run the program:
    #   ns dsr.tcl
    #snapshot of the program:


    A Wireless Network


    In this section, we are going to develop a TCL script for NS which simulates a simple wireless network.  We are going to learn how a wireless network  functions, how data is sent from one node to another, and how CSMA works.  (If you are not familiar with how to create a simple network, please refer to this link ... Your First Network)

    We are going to simulate a very simple 2-node wireless scenario.  The topology consists of two mobile nodes, node_(0) and node_(1).  The mobile nodes move about within an area whose boundary is defined in this example as 500m x 500m. The nodes start out initially at two opposite ends of the boundary.  Then they move towards each other in the first half of the simulation and move away again in the second half.  A TCP connection is set up between the two mobile nodes.  Packets are exchanged between the nodes as they come within hearing range of one another. As they move away, packets start getting dropped.
    You will need to define the options used by the rest of the script; the two other tutorials use the same options with slight changes.  You can view it here, and a minimal sketch of the values used in this example follows below.
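    As that sketch (the exact values here are assumptions based on the text rather than the linked file), the options referenced by this excerpt could be defined as:

    # Assumed option definitions for this example
    set val(nn)   2      ;# two mobile nodes
    set val(x)    500    ;# X dimension of the 500m x 500m boundary
    set val(y)    500    ;# Y dimension of the 500m x 500m boundary
    set val(stop) 150    ;# simulation ends at 150.0s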
    Next we go to the main part of the program and start by creating an instance of the simulator,
    set ns_    [new Simulator]

    Then set up trace support by opening the file simple.tr and calling the procedure trace-all {} as follows:
    set tracefd     [open simple.tr w]
    $ns_ trace-all $tracefd           
    
    set namtrace    [open simple-wireless.nam w]

    Next create a topology object that keeps track of movements of mobile nodes within the topological boundary.
    set topo [new Topography]

    We had earlier mentioned that mobile nodes move within a topology of 500m x 500m. We provide the topography object with x and y co-ordinates of the boundary, (x=500, y=500) :
    $topo load_flatgrid 500 500
    $ns_ namtrace-all-wireless $namtrace 500 500


    The topography is broken up into grids and the default value of grid resolution is 1. A different value can be passed as a third parameter to load_flatgrid {} above.
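    For example, to use a coarser grid (the resolution value 2 here is purely illustrative):

    # Optional third argument overrides the default grid resolution of 1
    $topo load_flatgrid 500 500 2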
    Next we create the object God, as follows:
    create-god $val(nn)

    Quoted from the CMU document on god, "God (General Operations Director) is the object that is used to store global information about the state of the environment, network or nodes that an omniscient observer would have, but that should not be made known to any participant in the simulation." Currently, the God object stores the total number of mobile nodes and a table of the shortest number of hops required to reach from one node to another. The next-hop information is normally loaded into the God object from movement pattern files before the simulation begins, since calculating this on the fly during simulation runs can be quite time consuming. However, in order to keep this example simple, we avoid using movement pattern files and thus do not provide God with next-hop information. The usage of movement pattern files and the feeding of next-hop info to God shall be shown in the example in the next sub-section.

    Create the specified number of mobile nodes [$val(nn)] and "attach" them to the channel.  The configuration of the nodes can be viewed here.
    Next we create the 2 mobile nodes as follows:
    for {set i 0} {$i < $val(nn) } {incr i} {
                    set node_($i) [$ns_ node ]
                    $node_($i) random-motion 0       ;# disable random motion
            }    

    The random-motion for nodes is disabled here, as we are going to provide node position and movement (speed and direction) directives next.

    Now that we have created mobile nodes, we need to give them a position to start with,
    #
    # Provide initial (X,Y, for now Z=0) co-ordinates for node_(0) and node_(1)
    #
    $node_(0) set X_ 5.0
    $node_(0) set Y_ 2.0
    $node_(0) set Z_ 0.0
    
    $node_(1) set X_ 390.0
    $node_(1) set Y_ 385.0
    $node_(1) set Z_ 0.0

    Node0 has a starting position of (5,2) while Node1 starts off at location (390,385).

    Next produce some node movements,
    #
    # Node_(1) starts to move towards node_(0)
    #
    $ns_ at 50.0 "$node_(1) setdest 25.0 20.0 15.0"
    $ns_ at 10.0 "$node_(0) setdest 20.0 18.0 1.0"
    
    # Node_(1) then starts to move away from node_(0)
    $ns_ at 100.0 "$node_(1) setdest 490.0 480.0 15.0" 

    $ns_ at 50.0 "$node_(1) setdest 25.0 20.0 15.0" means at time 50.0s, node1 starts to move towards the destination (x=25,y=20) at a speed of 15m/s. This API is used to change direction and speed of movement of the mobile nodes.

    Next setup traffic flow between the two nodes as follows:
    # TCP connections between node_(0) and node_(1)
    
    set tcp [new Agent/TCP]
    $tcp set class_ 2
    set sink [new Agent/TCPSink]
    $ns_ attach-agent $node_(0) $tcp
    $ns_ attach-agent $node_(1) $sink
    $ns_ connect $tcp $sink
    set ftp [new Application/FTP]
    $ftp attach-agent $tcp
    $ns_ at 10.0 "$ftp start" 

    This sets up a TCP connection between the two nodes with a TCP source on node0.

    Then we need to define the stop time, when the simulation ends, and tell the mobile nodes to reset, which actually resets their internal network components,
    #
    # Tell nodes when the simulation ends
    #
    for {set i 0} {$i < $val(nn) } {incr i} {
        $ns_ at 150.0 "$node_($i) reset";
    }
    $ns_ at 150.0001 "stop"
    $ns_ at 150.0002 "puts \"NS EXITING...\" ; $ns_ halt"
    proc stop {} {
        global ns_ tracefd
    $ns_ flush-trace
        close $tracefd
        exec nam simple-wireless.nam  &
        exit 0
    }

    At time 150.0s, the simulation shall stop. The nodes are reset at that time and the "$ns_ halt" is called at 150.0002s, a little later after resetting the nodes. The procedure stop{} is called to flush out traces and close the trace file.

    And finally the command to start the simulation,
    puts "Starting Simulation..."
    $ns_ run

    Save the file simple-wireless.tcl (for a format of how the TCL script should look, view it here).  Next run the simulation in the usual way (type at prompt: "ns simple-wireless.tcl" ).

    Tcl and Expect

    Introduction

    Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc. Expect really makes this stuff trivial. Expect is also useful for testing these same applications. And by adding Tk, you can also wrap interactive applications in X11 GUIs.
    Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
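    As a flavour of what this looks like in practice, here is a minimal sketch of an Expect script that automates an anonymous ftp session (the host name is a placeholder):

    #!/usr/bin/expect
    # Minimal sketch: log in to an ftp server and fetch a directory listing.
    spawn ftp ftp.example.com            ;# placeholder host
    expect "Name*:"
    send "anonymous\r"
    expect "Password:"
    send "user@example.com\r"
    expect "ftp>"
    send "ls\r"
    expect "ftp>"
    send "quit\r"
    expect eof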

    Expect was conceived of in September, 1987. The bulk of version 2 was designed and written between January and April, 1990. Minor evolution occurred after that until Tcl 6.0 was released. At that time (October, 1991) approximately half of Expect was rewritten for version 3. See the HISTORY file for more information. The HISTORY file is included with the Expect distribution.
    Around January 1993, an alpha version of Expect 4 was introduced. This included Tk support as well as a large number of enhancements. A few changes were made to the user interface itself, which is why the major version number was changed. A production version of Expect 4 was released in August 1993.
    In October 1993, an alpha version of Expect 5 was released to match Tcl 7.0. A large number of enhancements were made, including some changes to the user interface itself, which is why the major version number was changed (again). The production version of Expect 5 was released in March '94.
    In the summer of 1999, substantial rewriting of Expect was done in order to support Tcl 8.2. (Expect was never ported to 8.1 as it contained fundamental deficiencies.) This included the creation of an exp-channel driver and object support in order to take advantage of the new regexp engine and UTF/Unicode. The user interface is highly but not entirely backward compatible. See the NEWS file in the distribution for more detail.
    There are important differences between Expect 3, 4, and 5. See the CHANGES.* files in the distribution if you want to read about the differences. Expect 5.30 and earlier versions have ceased development and are not supported. However, the old code is available from http://expect.nist.gov/old.
    The Expect book became available in January '95. It describes Expect 5 as it is today, rather than how Expect 5 was when it was originally released. Thus, if you have not upgraded Expect since before getting the book, you should upgrade now.

    Historical notes on Tcl and Tk according to John Ousterhout

    I got the idea for Tcl while on sabbatical leave at DEC's Western Research Laboratory in the fall of 1987. I started actually implementing it when I got back to Berkeley in the spring of 1988; by summer of that year it was in use in some internal applications of ours, but there was no Tk. The first external releases of Tcl were in 1989, I believe. I started implementing Tk in 1989, and the first release of Tk was in 1991.


    In the design of automated systems in Expect, one of the more difficult hurdles many programmers encounter is ensuring communication with ill-behaved connections and remote terminals. The send_expect procedure detailed in this article provides a means of ensuring communication with remote systems and handles editing and rebroadcast of the command line. Where a programmer would usually send a command line and then expect the echo from the remote system, this procedure replaces those lines of code and provides the most reliable interface I have come across.  Features of this interface include:
    • Guarantees transmission via remote system echo
    • Tolerates remote terminal control codes and garbage characters in the echo of the sent string
    • Persistence of attempts and hierarchy of methods before declaring a failure
    • Interactively edits and retransmits command lines that cannot be verified
    • Maintains its own moving-window diagnostics files, so they are small and directly associated with the errors
    Communication with local processes (i.e. those running on the same workstation as the expect process) is typically not problematic and does not require the solutions detailed in this article.  External processes, however, can create a number of problems that may or may not affect communication, but will affect an automated system's ability to determine the success of the communication.  When a transmission is corrupted, the corruption is not always immediately obvious: a corrupted command may trigger an error message, but corrupted data may still be considered valid, so the error would not show up immediately and could cause a variety of problems.  This is why it is necessary to ensure that the entire string that is transmitted is properly received and echoed by the remote system.
    The basic idea of this interface is to send the command string except for its terminating character (usually, a carriage return) and look at the echo from the remote system.  If the two can be matched using the regular expressions in the expect clauses, then the terminating character is sent and transmission is considered successful. If success cannot be determined, the command line is cleared instead of being sent, and alternative transmission modes are used.
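    A stripped-down sketch of that idea (not the author's actual send_expect implementation, which is considerably more elaborate) might look like this:

    # Sketch only: send a command without its carriage return, verify the echo,
    # then release the terminating "\r".  $sid is an assumed spawn id.
    proc send_and_verify {sid cmd} {
        exp_send -i $sid -- $cmd
        expect {
            -i $sid -ex $cmd {
                # echo matched exactly; send the terminating character
                exp_send -i $sid -- "\r"
                return 0
            }
            timeout {
                # echo could not be verified; clear the command line (control-U)
                exp_send -i $sid -- "\025"
                return 1
            }
        }
    }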
    In many cases, nothing more than expecting the exact echo of the string is sufficient.  If you're reading this article, though, I suspect that you've encountered some of the problems I have when programming in Expect, and you're looking for the solution here.  If you're just reading out of interest, the problems arise when automating a session on a machine off in a lab, or on the other side of the world.  Strange characters pop up over the connection, and the terminal you're connected to does weird things with its echo, but everything is working.  It becomes very difficult to determine if what was sent was properly received when you have noise on the connection, terminal control codes inserted in the echo, and even server timeouts between the automation program and the remote session.  This interface survives all of that, and if it can't successfully transmit the string, it means that the connection to the remote system has been lost. 
    The code provided in this article is executable, but needs to be incorporated into any system in which it is to be used.  Ordinarily, system-dependent commands need to be added based on the needs of the target system.  Also, this code uses simple calls to the puts command to output status messages - these should be changed to use whatever logging mechanism is used by the rest of the system.  A final caveat, and I can't emphasize this enough: always wear eye protection. 

    The procedures provided in this article are as follows.
    The interface is initialized with the send_expect_init procedure, which sets up all the globals required by the other procedures.  See the section on controlling the behavior of the interface for an explanation of the parameters.  The send_expect_init procedure is run once, at the beginning of execution (before the interface is to be used).  It may be run a second time to restore settings, if necessary. 
    The send_only procedure is a wrapper for the exp_send command, and is used by send_expect to transmit strings.  The only time this procedure is called directly is for strings that are not echoed, such as passwords, and multi-byte character constants, such as the telnet break character (control-]).
    The send_expect procedure is the actual interface between the automated system and its remote processes, and is detailed in the next section.
    Finally, the send_expect_report procedure is used at the end of execution to output the statistics of the interface for debugging.  This procedure may also be run during execution, if incremental reports are needed.
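    Putting these together, the overall calling sequence is roughly as follows (a sketch only; the procedure bodies are in the article's code, and the telnet host is a placeholder):

    send_expect_init                        ;# set up the sendGlobals defaults (run once)
    spawn telnet remote.example.com         ;# placeholder remote system
    set id $spawn_id

    # ... log in, using send_only for strings that are not echoed, such as passwords ...

    if { [send_expect $id "show version\r"] != 0 } {
        ## handle your error here
    }

    send_expect_report                      ;# output interface statistics at end of run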

    Using The send_expect Procedure
    Once the interface has been initialized using send_expect_init, and a process has been spawned, it is ready to be used with the syntax:
    send_expect id command;
    where 
    id = the spawn id of the session on which to send the command, and 
    command = the entire command string including the terminating carriage-return, if any. 
    This syntax, and the implementation of the expression-action lists, support multiple-session applications. 
    The examples provided in this article are kept simple, but with attention to detail; where warranted, a complete implementation is provided as an example.  The send_expect procedure usually replaces only two lines of code in an existing system.
    The full syntax for properly using the interface is actually:
      if { [send_expect $id $command] != 0} {
       ## handle your error here
      }

    The interface uses four different transmission modes, in order:

  • 1) send the entire string and hope for the best (fastest, but least reliable)


  • 2) send the entire string using the send_slow list


  • 3) send the string in blocks of eight characters


  • 4) send the string one character at a time (slowest, but most reliable)


    If a mode fails, the command line is cleared by sending the standard control-U, the expect buffer is cleared, and the next mode is tried.  Each mode except the last one can also have a failure tolerance set, using sendGlobals(ModeXFailMax), where X is 1, 2 or 3.
    If this maximum value is set to a positive number, then once the failures for that mode exceed this value, it is no longer used.  If it is set to 0, each mode is tried for each transmission, regardless of the number of failures.  Each of the modes uses the send_only procedure as a wrapper for exp_send.  If this procedure returns an error, it most likely means that the connection was lost, and the spawn id is checked to see if the session is still active.  The error is returned to send_expect, which in turn returns an error to the calling procedure.
    For local processes and robust remote connections, mode 1 is usually sufficient.  If the remote system is a bit slow, mode 2 may be required.  Mode 3 has proven invaluable when connected to routers and clusters which provide rudimentary terminal control.  Mode 4 is rarely required, but acts as a backup to mode 3.

    Controlling The Behavior Of The Interface:
    The sendGlobals array contains all of the parameters used by the interface, and is initialized with send_expect_init.  It may be modified at runtime to control how the interface works.  This section will cover the meanings of these parameters and how they may be modified.
    The failure limit elements (Mode1FailMax, Mode2FailMax, and Mode3FailMax) determine how many failures are permitted for modes 1, 2 and 3 (respectively).  A value of zero disables this limitation, and any positive integer sets the maximum number of failures for that mode before it is no longer used by the interface.  There is no failure limit for the last mode.
    The element useMode allows the system to determine which transmission mode should be used first, so that the less reliable modes (the first and second) can be bypassed.  Allowable values for this parameter are 1, 2, 3, or 4.  Invalid values will be replaced by the default mode (1).
    If transmission errors are not considered fatal, the sendErrorSeverity element may be set to a more tolerant value.  Note that this parameter is not used internally, so if the automated system does not access this value, it won't affect the interface.
    The kill element defines the command line kill character, which is defaulted to the Gnu-standard control-U. 
    The diagFile parameter names the temporary internal diagnostics file (generated from exp_internal).
    The logDiags element allows disabling of all diagnostics output for faster execution, but be forewarned that disabling this feature will make debugging much more difficult.
    The interval and delay elements represent the two items in the send_slow list, which is used by the second and third modes. 
    For experimentation purposes, it is recommended that these parameters be modified by the automated system at runtime, rather than by directly editing the defaults in the initialization procedure.  Once valid settings are found, the defaults may be changed to reflect them.
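    For example (the parameter names are those described above; the values are purely illustrative), an automated system might adjust the interface at runtime like this:

    # Illustrative runtime tuning of the interface
    set sendGlobals(Mode1FailMax) 3     ;# retire mode 1 after three failures
    set sendGlobals(Mode2FailMax) 0     ;# never retire mode 2
    set sendGlobals(useMode)      2     ;# start with mode 2, bypassing mode 1
    set sendGlobals(logDiags)     1     ;# keep diagnostics logging enabled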

    Tcl Socket Programming

    Example Code - Socket server and client

    This is code from a simple echo server and client posted to the newsgroup by Ray Tripamer (ray@asci.com). I've commented it to make it a little clearer why it's doing what it does, and to serve as something of an example of what you have to do to implement socket servers and clients in Tcl.

    Echo Client

    This implements a client that opens a server connection, sends messages from stdin, receives server replies and sends them to stdout.
    #!/usr/local/bin/tclsh7.5
    
    # Read data from a channel (the server socket) and put it to stdout
    # this implements receiving and handling (viewing) a server reply 
    proc read_sock {sock} {
      set l [gets $sock]
      puts stdout "ServerReply:$l"
    }
    
    # Read a line of text from stdin and send it to the echoserver socket,
    # on eof stdin closedown the echoserver client socket connection
    # this implements sending a message to the Server.
    proc read_stdin {wsock} {
      global  eventLoop
      set l [gets stdin]
      if {[eof stdin]} {
        close $wsock             ;# close the socket client connection
        set eventLoop "done"     ;# terminate the vwait (eventloop)
      } else {
        puts $wsock $l           ;# send the data to the server
      }
    }
    
    # open the connection to the echo server...
    set eshost "scoda"
    set esport 9999
    
    # this is a synchronous connection: 
    # The command does not return until the server responds to the 
    #  connection request
    set esvrSock [socket $eshost $esport]
    
    #if {[eof $esvrSock]} { # connection closed .. abort }
    
    # Setup monitoring on the socket so that when there is data to be 
    # read the proc "read_sock" is called
    fileevent $esvrSock readable [list read_sock $esvrSock]
    
    # configure channel modes
    # ensure the socket is line buffered so we can get a line of text 
    # at a time (Cos thats what the server expects)...
    # Depending on your needs you may also want this unbuffered so 
    # you don't block in reading a chunk larger than has been fed 
    #  into the socket
    # i.e fconfigure $esvrSock -blocking off
    
    fconfigure $esvrSock -buffering line
    
    # set up our keyboard read event handler: 
    #   Vector stdin data to the socket
    fileevent stdin readable [list read_stdin $esvrSock]
    
    # message indicating connection accepted and we're ready to go 
    puts "EchoServerClient Connected to echo server"
    puts "...what you type should be echoed."
    
    # wait for and handle either socket or stdin events...
    vwait eventLoop
    
    puts "Client Finished"
    
    Another option is to do an asynchronous client connection
    
    set esvrSock [socket -async $eshost $esport]
    
    # .... do whatever that we can't connect synchronously... 
    
    # resync with the connection, 
    #Socket becomes writable when connection available
    fileevent $esvrSock writable { set connect 1 }
    vwait connect   
        # will 'block' here till connection up (or eof or error)
    
    fileevent $esvrSock writable {}    ;# remove previous handler
    
    if {[eof $esvrSock]} { # connection closed .. abort }
    
    # set translation, buffering  and/or blocking modes
    fconfigure $esvrSock -translation {auto crlf} -buffering line
        ...
    
    

    Echo Server

    Server that reflects its client messages back to the source
    
    #!/usr/local/bin/tclsh7.5
    
    set svcPort 9999
    
    # Implement the service
    # This example just writes the info back to the client...
    proc doService {sock msg} {
        # puts $sock "echosrv:$msg"
         puts $sock "$msg"
    }
    
    # Handles the input from the client and  client shutdown
    proc  svcHandler {sock} {
      set l [gets $sock]    ;# get the client packet
      if {[eof $sock]} {    ;# client gone or finished
         close $sock        ;# release the servers client channel
      } else {
        doService $sock $l
      }
    }
    
    # Accept-connection handler for the server.
    # Called when a client makes a connection to the server.
    # It's passed the channel we're to communicate with the client on,
    # the address of the client, and the port we're using.
    #
    # Setup a handler for (incoming) communication on 
    # the client channel - send connection Reply and log connection
    proc accept {sock addr port} {
      
      # if {[badConnect $addr]} {
      #     close $sock
      #     return
      # }
    
      # Setup handler for future communication on client socket
      fileevent $sock readable [list svcHandler $sock]
    
      # Read client input in lines, disable blocking I/O
      fconfigure $sock -buffering line -blocking 0
    
      # Send Acceptance string to client
      puts $sock "$addr:$port, You are connected to the echo server."
      puts $sock "It is now [exec date]"
    
      # log the connection
      puts "Accepted connection from $addr at [exec date]"
    }
    
    
    # Create a server socket on port $svcPort. 
    # Call proc accept when a client attempts a connection.
    socket -server accept $svcPort
    vwait events    ;# handle events till variable events is set
    
    
    

    Background

    Here's some background from Jan Wieck (wieck@sapserv.debis.de) on the concepts involved with socket library calls generally and how they map into Tcl. It may help illuminate some of the above.
    Socket below means STREAM socket in AF_INET (Internet domain)
    What's a socket? A socket is a bidirectional communication channel: bidirectional because it allows both sending and receiving. A socket is identified, from the process point of view, by a handle (a file descriptor in UNIX). On the network side it's identified by a network host address AND a port number. It's created by the system call socket(2). When socket(2) returns a valid handle (file descriptor), it has already assigned the network address and a dynamically allocated port number that isn't in use by another socket on your local system. This combination of host address and port number is called the socket name.
    To connect two sockets, to form something like a bidirectional pipe, a program must call connect(2) with the socket name of the remote socket given in sockaddr. Because it's very difficult to guess the dynamic port number, there is a way to change the 'name' of a socket. The system call to do that is bind(2). Bind has some restrictions. The port number you want must not be in use by any other socket on the local system. Thus, it's guaranteed that all socket names all over the world are unique and name only one single handle in a process (as long as all the host addresses are unique). Only the superuser can bind to a port number below 1024, as these are allocated for 'system' services and we don't want to allow spoofing of these.
    Since normally a server is sitting somewhere around, waiting for a client that wants to connect, it's usual to give the server socket a fixed name. A fixed name in this case means that the server will create a socket and bind it to the current host address and a fixed port number. The file /etc/services is a list of (hopefully) all the port numbers for standard services.
    So let's fire up the server.
    • First, the server process calls socket(2) to create a socket with a partly random name.
    • Second, the server calls bind(2) to give the socket a fixed name that will be used later by the clients.
    • Third, the server tells the kernel that it is willing to accept incoming connection requests by calling listen(2).
    In Tcl, all three steps are performed if you issue
    socket -server {command} port
    
    Port must be the port number the client will use in its connection request (see below).
    What we now have is a server socket. Back to C. This socket becomes readable when a client wants to connect. But the readability in this case doesn't mean that you can read data from it. It's a hack to tell the server process that there's someone knocking at the door. So let's take a look at the client.
    A client process, too, creates a socket. But it doesn't care about the socket name (except for special purposes that deal with security). So it leaves it untouched and directly tries to establish the connection using connect(2). The connect(2) system call needs the remote socket name of the server socket. The server explicitly 'named' its socket (host+port), so this isn't a problem.
    At this moment, the server socket becomes readable. The server now calls accept(2) on its socket. accept(2) creates a new socket, again with a dynamically assigned port number. This new socket and the socket in the client form the bidirectional pipe. accept(2) returns the handle (file descriptor) of the new socket and fills a buffer with the socket name of the client's socket.
    In Tcl, the two steps for the client (calling socket(2) and connect(2)) are performed if you issue
    socket host port
    
    What you might miss in Tcl is the accept step. But it's there. Because accept(2) normally blocks until there is at least one client that wants to connect, Tcl forces you into the event-driven world. After you have created the server socket, Tcl monitors the readability of the server socket in its event loop. If it becomes readable, i.e. a client wants to connect, Tcl does the accept(2) and calls command with all the information given by accept(2). So command will be invoked once for every client that connects to your server. But this requires that the server gets back into the event loop. So you have to switch the communication socket in the server (the one given as an argument to command) to nonblocking I/O and do everything in fileevent handlers.
    It is important that a server process be completely controlled by the event loop (the default for Tk; use vwait in Tcl).