Issue
I'm having trouble finding info on this. I've seen numerous posts about sharing data between scripts running in the same environment, but not between environments.
I'm using Anaconda with 16 envs that use several different Python versions (3.5 to 3.8) for running various scripts/apps. I'm now working on shell scripts and a master Python script that will control launching the other envs, launching scripts, opening OS-native apps, and automating scripted tasks, with everything saving, moving, and accessing data in several master folders on the same machine. I suppose the master script will behave like a mini server, and it also lives in its own env.
What I'm trying to figure out is whether there's a way to easily pipe data between the environments, or whether I have to store things in YAML or JSON files that they can all access (for example, passing custom environment variables from the master env to all the others, one script letting another know when it has completed, or detecting when a specific terminal window has closed).
I don't need most scripts to share data with each other directly. I need feedback sent from the other envs to the master script, which will be in control of everything, print output in its terminal window, and fire up shell scripts. I need that master script to communicate with the other envs/scripts to give them new tasks and to load up the envs themselves, and so it knows when it's time to do something else: basically event listener and event handler stuff (I assume), along with watch folders. Most of them will run consecutively. Some scripts will run at the same time from their own shells in the same environment, processing different data, and at times the master script (or an individual script) will pause and await user feedback in the master script terminal.
It might sound more complicated than it really is, as most things happen linearly and the data that needs to be shared is small events and variables. One thing starts and finishes, a watch folder sees new files and the master fires up a new env and launches a script to process them, then the next kicks off, then the next, then 4 of the same thing kick off, then an app opens and runs a task and closes, then the user is prompted to choose which task runs next, and so on.
I found these packages which seem promising for some of the tasks:
- python-dotenv and python-dotenv[cli]
- watchdog and watchdog[watchmedo]
- PyYAML
- libyaml
- appscript
- inquirer
- simplejson
- cliclick (macOS Terminal commands for executing keyboard commands and mouse movements)
Creating lambda functions seems to be an easy way to execute OS/system commands with Python. Right now I just need to find a way to get all of these things talking to the master and vice versa, and sharing some data. One app uses JupyterLab, and I'm not sure whether it's easy to automate that from another env/script.
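For example, a tiny sketch of what I mean by that (the commands here are just placeholders, not the actual tasks):

import os

# placeholder commands, stored as callables to fire later
open_home = lambda: os.system('open ~')  # macOS: open the home folder in Finder
copy_tree = lambda src, dst: os.system(f'cp -R {src} {dst}')

open_home()
copy_tree('/tmp/in', '/tmp/out')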
I don't need something with a GUI like JupyterLab (I honestly don't like its UI), and I prefer to use a single master terminal window with some simple user input options.
A point in the right direction would be greatly appreciated.
Solution
It seems the solution here is to use sockets: create a server, and create clients inside the scripts I need to use. Not sure why my searches weren't bringing up sockets, but it's the solution I needed, and it doesn't require dependencies.
Sockets are built into Python, so import socket can handle most of what I need.
On top of that, import threading lets multiple threads be used for clients, and I'm using os.system to send system commands. The threads are set up as daemons to avoid any trouble if a client doesn't disconnect cleanly.
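A small sketch of that combination (the command here is just a placeholder):

import os
import threading

def run_command(cmd):
    # fire a shell command from a worker thread
    os.system(cmd)

# daemon=True means the thread won't keep the process alive
# if something doesn't shut down cleanly
t = threading.Thread(target=run_command, args=('echo task started',), daemon=True)
t.start()
t.join()  # joined here only so this example waits for the command to finish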
This has the benefit of running on a local network, but it can also be used in more complex systems to connect to remote clients and servers. Running locally, the server can use its private IPv4 address to send and receive on one machine or across the intranet.
Tutorial
I found this YouTube video by Tech With Tim going through the complete basic setup, which was a big help as I'm completely new to this.
I ended up setting up classes for the server and client because all the functionality I needed would not work right without them. The video was a good way to get my feet wet, but far from everything that was needed.
I also created a standalone task manager script which is working better than trying to make the server do everything.
Basic server setup
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(ADDRESS)  # ADDRESS = (SERVER, PORT), defined in the Vars section below
After that you need a function to handle client messages and a startup function for the server itself, using server.listen() to listen for incoming connections and a while loop to accept each connection and kick off a new thread for that client.
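Putting those pieces together, a minimal sketch of that loop (assuming the server socket above and the HEADER, FORMAT, and DISCONNECT values from the Vars section below) could look like this:

import threading

def handle_client(conn, addr):
    # receive messages from one client until it sends the disconnect message
    connected = True
    while connected:
        header = conn.recv(HEADER).decode(FORMAT)  # fixed-size header holding the message length
        if not header:
            break  # client closed the connection
        length = int(header)
        msg = conn.recv(length).decode(FORMAT)
        print(f'[{addr}] {msg}')
        if msg == DISCONNECT:
            connected = False
    conn.close()

def start():
    server.listen()  # listen for incoming connections
    while True:
        conn, addr = server.accept()  # blocks until a client connects
        # one daemon thread per client, so an unclean disconnect can't hang the server
        thread = threading.Thread(target=handle_client, args=(conn, addr), daemon=True)
        thread.start()

start()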
I keep my master controller script separate, because I found it cumbersome to have the server running in the same window where user input needed to take place. So I just programmatically launch, size, and position a new Terminal window from the master and load the server script inside it.
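One way to do that from Python is osascript; this is a rough sketch, and the script path and window bounds are placeholders, not my actual values:

import os

# open a new Terminal window running the server script, then size/position it
os.system("osascript -e 'tell application \"Terminal\" to do script \"python3 /path/to/server.py\"'")
os.system("osascript -e 'tell application \"Terminal\" to set bounds of front window to {50, 50, 850, 650}'")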
Basic client setup
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ADDRESS)  # ADDRESS = (SERVER, PORT), same values the server binds to
As with the server, the client needs a function for sending messages. Tim had a nice approach where the client first sends a small fixed-size header telling the server how many bytes the incoming message will be, and only then sends the message itself, to ensure nothing gets truncated.
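A minimal sketch of that send function (assuming the client socket above and the HEADER, FORMAT, and DISCONNECT values from the Vars section below):

def send(msg):
    message = msg.encode(FORMAT)
    # header: the message length, padded out to exactly HEADER bytes
    header = str(len(message)).encode(FORMAT)
    header += b' ' * (HEADER - len(header))
    client.send(header)
    client.send(message)

send('task finished')
send(DISCONNECT)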
Vars
Using environment variables and an env file really helped streamline this setup. I did this using python-dotenv. Make sure to cast the port to an int in the main script, or it might error because os.getenv() returns it as a string.
As I made my scripts more advanced, I ended up placing all my vars and dicts full of vars in a custom module that I load as needed.
import os
from dotenv import load_dotenv

load_dotenv()  # read PORT and SERVER from the .env file

HEADER = 64  # header size in bytes
PORT = int(os.getenv('PORT'))  # make this an int
SERVER = os.getenv('SERVER')  # server local IP
ADDRESS = (SERVER, PORT)
FORMAT = 'utf-8'
DISCONNECT = '!killClient'
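The matching .env file just holds the two values read above (hypothetical values here):

PORT=5050
SERVER=192.168.1.100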
Answered By - liquidRock