Forgotten Technologies: CGI
Inspired by the comments: people have a strong association between Perl as a language and CGI as a technology used in the early days of the web.
It makes sense: at that time, Perl was one of the few widespread scripting languages, and naturally, it was more convenient to write CGI scripts in Perl than, say, in shell. However, this doesn’t mean the two were inseparable.
In general, the technology was good in its own way: you installed a web server (usually Apache), configured a directory from which scripts could be executed — and you could run whatever you wanted.
Unlike the more complex web stacks that came later, this was literally running a program on the server, just as you would run it through a terminal or console. The request metadata arrived in environment variables, the request body (if any) on STDIN, and everything the program wrote to STDOUT was sent back to the client’s browser.
Essentially, it was a regular non-interactive console program, an API in its purest, stateless form: the server starts the program from scratch for every request, and the program works only with what it was given.
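As a minimal sketch of that contract (a hypothetical hello.cgi, not part of the setup described below):
#!/bin/sh
# headers come first, followed by an empty line that ends them
echo "Content-Type: text/plain"
echo ""
echo "Hello from CGI"
# a request body (e.g. from a POST) arrives on STDIN; CONTENT_LENGTH says how much to read
if [ -n "$CONTENT_LENGTH" ] ; then
    head -c "$CONTENT_LENGTH"
fi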
The downside was that launching the program always took some time, especially if it was written in a scripting language that required interpretation. For one-off requests, this wasn’t critical, but when the program takes 0.5 seconds to start and you have 100 requests per second — things start to slow down a bit.
This is why FastCGI was later introduced, where the program was “pre-launched” and waited for data, and eventually, we moved to built-in servers with multithreading.
Now, let me show you how this can be used today:
As I mentioned in a previous article, I set up a “universal access block”: a single-board computer with a routing proxy server that separates the sheep from the goats, the flies from the cutlets, and the sick from the healthy.
But, like any program, all these features may require a restart. Moreover, home internet is inherently unstable — a tree falls on the wires, an excavator digs up something wrong, and suddenly the cat pictures stop loading. You need to understand what happened.
Of course, you can always SSH in, type a few commands in the console, and find out, but can we make it easier? Writing a separate web system to check everything seems like overkill.
So, the task:
- There are several programs running in the background.
- There are methods to check if the program is running or “hung.”
- We need to create a simple web page with the current status and the ability to restart the programs.
- Since this is deep inside a local network, no authorization is required, but no one should be able to break anything with clumsy hands.
We install Apache on this server (Nginx can’t handle simple CGI without some hoop-jumping):
apt install apache2
After installation, everything works “out of the box,” but we need to enable the CGI module, which is disabled by default:
cd /etc/apache2/mods-enabled
ln -s ../mods-available/cgi.load .
/etc/init.d/apache2 restart
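The same can be done with Debian’s own a2enmod helper (if Apache runs a threaded MPM, the cgid module is the one to enable instead):
a2enmod cgi
systemctl restart apache2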
Done. By default, CGI scripts should be located in /usr/lib/cgi-bin/ and are accessible via the web as /cgi-bin/*.cgi. Of course, this can be changed, especially if the old-fashioned “cgi-bin” bothers you, but we won’t go into that here. And, of course, this directory is empty initially.
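For reference, that mapping comes from Apache’s stock CGI configuration (on Debian it lives in a file along the lines of conf-available/serve-cgi-bin.conf); a trimmed sketch of the relevant directives, which you can adapt if you want a different directory:
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
    AllowOverride None
    Options +ExecCGI
    Require all granted
</Directory>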
We create the first file, env.cgi:
#!/bin/sh
# CGI output: headers, an empty line, then the response body
echo "Content-Type: text/html"
echo ""
echo "<pre>"
env
echo "</pre>"
id
We make it executable:
chmod 755 env.cgi
We access the server via a browser:
http://xx.xx.xx.xx/cgi-bin/env.cgi
Output:
GATEWAY_INTERFACE=CGI/1.1
REMOTE_ADDR=XX.XX.XX.XX
QUERY_STRING=
HTTP_USER_AGENT=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36
DOCUMENT_ROOT=/var/www/html
REMOTE_PORT=40472
HTTP_UPGRADE_INSECURE_REQUESTS=1
HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
SERVER_SIGNATURE=
Apache/2.4.62 (Debian) Server at 10.1.0.4 Port 80
....
uid=33(www-data) gid=33(www-data) groups=33(www-data)
No complex programs, new or old languages, frameworks, nothing — just a simple shell script.
Here, the interesting lines are QUERY_STRING= and the last line, uid=33(www-data) gid=33(www-data) groups=33(www-data).
QUERY_STRING contains whatever string is entered in the address after the script name:
http://XX.XX.XX.XX/cgi-bin/env.cgi?blablabla -> QUERY_STRING=blablabla
Ideally, this should be URL-encoded GET parameters, but in practice it doesn’t matter what the string is, as long as it sticks to characters allowed in a URL. In our case, a single word naming the program our script should work with is enough.
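If you ever want real key=value parameters, they can be split in plain shell too; a minimal sketch (it skips URL-decoding of %XX sequences, which our single-word parameters don’t need):
# hypothetical example: QUERY_STRING like "prog=xray&action=check"
OLDIFS=$IFS
IFS='&'
for pair in $QUERY_STRING ; do
    key=${pair%%=*}
    value=${pair#*=}
    echo "$key = $value"
done
IFS=$OLDIFS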
The bottom line, the result of the id command, shows that the web server is running under the www-data user.
To check the internet and the various proxy options, we can use the standard curl:
A direct request will give us the external IP:
curl http://v4v6.ipv6-test.com/api/myip.php --silent
A request through a SOCKS5 proxy will give us the external IP visible through the proxy, and also check the proxy’s functionality:
curl -x socks5h://127.0.0.1:1080 http://v4v6.ipv6-test.com/api/myip.php --silent
Others are similar.
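When only reachability matters rather than the body, curl can also be asked to print just the HTTP status code (a small variation using curl’s standard -I and -w options):
curl -x socks5h://127.0.0.1:1080 http://v4v6.ipv6-test.com/api/myip.php -I --silent -o /dev/null -w '%{http_code}\n'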
We could do all the checks in one script and return the results as a single JSON document, but some requests may take a long time and the script would have to wait for all of them. That isn’t very convenient, so there will still be a single script, but what it checks will depend on the query parameter.
Thus, we get the script check_one.cgi:
#!/bin/sh
q=$QUERY_STRING
# remove everything else from the environment
unset $(env | cut -d= -f1)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
# mandatory header
echo "Content-Type: application/json"
echo ""
# prepend "x" to the parameter to avoid comparing empty strings
if [ "x$q" = "xxray" ] ; then
    # here we ask the remote server for a response; it returns our IP address
    INETIP=$( curl http://v4v6.ipv6-test.com/api/myip.php --silent )
    PROXYIP=$( curl -x socks5h://127.0.0.1:1080 http://v4v6.ipv6-test.com/api/myip.php --silent )
    echo "{\"extip\":\"$INETIP\",\"proxyip\":\"$PROXYIP\"}"
elif [ "x$q" = "xi2p" ] ; then
    X=1
    # here and below only the fact of a response matters, so headers and the exit code are enough
    curl -x socks5h://127.0.0.1:4447 http://flibusta.i2p -I --silent -o /dev/null
    if [ $? -ne 0 ] ; then
        X=0
    fi
    echo "{\"i2p\":$X}"
elif [ "x$q" = "xnodpi" ] ; then
    X=1
    curl -x socks5h://127.0.0.1:1081 https://jnn-pa.googleapis.com -I --silent -o /dev/null
    if [ $? -ne 0 ] ; then
        X=0
    fi
    echo "{\"nodpi\":$X}"
elif [ "x$q" = "xproxy" ] ; then
    X=1
    curl -x socks5h://127.0.0.1:6007 http://v4v6.ipv6-test.com/api/myip.php -I --silent -o /dev/null
    if [ $? -ne 0 ] ; then
        X=0
    fi
    echo "{\"proxy\":$X}"
else
    # a reminder of which options are available
    echo "{\"options\":[\"proxy\",\"xray\",\"i2p\",\"nodpi\"]}"
fi
Now, by specifying certain options, we can get a response on whether a particular route is working or not.
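From a terminal, the same thing looks roughly like this (the returned addresses here are made-up placeholders):
curl 'http://xx.xx.xx.xx/cgi-bin/check_one.cgi?xray'
{"extip":"203.0.113.10","proxyip":"198.51.100.7"}
curl 'http://xx.xx.xx.xx/cgi-bin/check_one.cgi?i2p'
{"i2p":1}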
Restarting services can be done like this:
Since the web server runs under the www-data user, it doesn’t have the rights to restart everything we want, and we won’t give it those rights. Instead, we’ll run the services specifically under that user, and in such a way that they restart themselves. We create typical wrapper scripts like:
#!/bin/sh
# discard all output
exec > /dev/null
exec 2>&1
cd /tmp
# if the process exits or is killed, start it again
while [ 1 ] ; do
    /usr/local/etc/xray/xray -c /usr/local/etc/xray/config.json
done
The xray program is not launched in the background, so if the process is killed or dies on its own, the infinite loop in the script will start it again. This protects against unexpected crashes, and if it hangs, it is enough to kill the hung process; www-data has sufficient rights for that.
The same applies to other processes.
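For example, a hypothetical start_i2pd follows the same pattern; only the binary path changes (the path below is an assumption, adjust it to your install):
#!/bin/sh
# same pattern as the xray wrapper: discard output, restart on exit
exec > /dev/null
exec 2>&1
cd /tmp
while [ 1 ] ; do
    /usr/sbin/i2pd
done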
Now we need to start the processes. For this, it’s convenient to use the /etc/rc.local script (in this case it works; if not, hook the startup in somewhere else, for example under /etc/init.d):
su www-data -s /bin/sh -c 'setsid /usr/local/bin/start_xray &'
The -s /bin/sh parameter is needed because the www-data user has no usable shell of its own (see vipw). The -c parameter runs the command, in this case setsid, which launches the startup script (a bit redundantly) as a daemon.
Others are similar. After a reboot, the startup scripts run and launch the programs, which now run under www-data and, if they crash or are killed, restart automatically.
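For completeness, the corresponding rc.local lines for the other two wrappers might look like this (start_i2pd and start_nodpi are placeholder names for whatever you called your scripts):
su www-data -s /bin/sh -c 'setsid /usr/local/bin/start_i2pd &'
su www-data -s /bin/sh -c 'setsid /usr/local/bin/start_nodpi &'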
The only nuance with i2pd is that it looks for its configs in the home directory, in ~/.i2pd, and for this user the home directory is /var/www (again, see vipw), so the settings should live there, and the config directory must be writable.
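Run once as root, something like this prepares that directory (the config files themselves are assumed to be copied in afterwards):
mkdir -p /var/www/.i2pd
chown -R www-data:www-data /var/www/.i2pd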
Now, we write a killer script:
#!/bin/sh
q=$QUERY_STRING
# remove everything else from the environment
unset $(env | cut -d= -f1)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
echo "Content-Type: text/html"
echo ""
pid=
if [ "x$q" = "xxray" ] ; then
    pid=$(ps ax | grep -v grep | grep 'xray/xray' | awk '{ print $1 }')
    if [ -n "$pid" ] ; then
        echo $pid
        kill $pid
        sleep 3
    fi
elif [ "x$q" = "xi2p" ] ; then
    pid=$(ps ax | grep -v grep | grep '/i2pd' | awk '{ print $1 }')
    if [ -n "$pid" ] ; then
        echo $pid
        kill $pid
        sleep 3
    fi
elif [ "x$q" = "xnodpi" ] ; then
    pid=$(ps ax | grep -v grep | grep '/ciadpi' | awk '{ print $1 }')
    if [ -n "$pid" ] ; then
        echo $pid
        kill $pid
        sleep 3
    fi
else
    echo "{\"options\":[\"xray\",\"i2p\",\"nodpi\"]}"
fi
ps ax gives the list of processes, the first grep removes the grep itself from the output, the second grep finds the desired program, awk extracts the PID, kill kills it, and sleep waits a bit for the process to die.
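As an aside, the same chain can be written more compactly with pgrep/pkill from the procps package, which match against the full command line with -f; this is an equivalent alternative rather than what the script above does:
pid=$(pgrep -f 'xray/xray')
pkill -f 'xray/xray'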
In general, that’s almost everything. Now, let’s create the page:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>HTML gate</title>
<style type="text/css">
body, html {
    margin: 0;
    padding: 0;
    font-family: Arial, sans-serif;
    height: 100%;
}
.placeholder {
    display:inline-block;
    width:10em;
}
.txt_error {
    color:#ff0000;
    font-weight:bold;
}
.txt_ok {
    color:#00ff00;
    font-weight:bold;
}
.hero {
    height: 100vh;
    background-image: url('bg-image.jpeg');
    background-color: black;
    background-position: center;
    background-repeat: no-repeat;
    background-size:cover;
    color: white;
    box-sizing: border-box;
}
#status {
    padding:2em;
}
#buttons {
    display:flex;
    align-items: center;
    justify-content: center;
    gap: 5em;
    padding-top: 300px;
}
#buttons button, #check {
    padding: 15px 35px;
    font-size: 20px;
    border: solid 3px #ff8b00;
    background: #00003370;
    color: #ffc800;
    border-radius: 10px;
    cursor:pointer;
}
.disabled {
    color:#736e5f!important;
    border: solid 3px #736e5f!important;
    cursor:not-allowed!important;
}
</style>
</head>
<body>
<div class="hero" id="head">
<div id="status">
<table>
<tr>
<td>External IP</td>
<td><span id="extip" class="placeholder"><span class="loading">...</span></span></td>
</tr>
<tr>
<td>Proxy IP</td>
<td><span id="proxyip" class="placeholder"><span class="loading">...</span></span></td>
</tr>
<tr>
<td>I2P network</td>
<td><span id="i2p" class="placeholder"><span class="loading">...</span></span></td>
</tr>
<tr>
<td>Proxy</td>
<td><span id="proxy" class="placeholder"><span class="loading">...</span></span></td>
</tr>
<tr>
<td>NoDPI</td>
<td><span id="nodpi" class="placeholder"><span class="loading">...</span></span></td>
</tr>
</table>
<button onclick="check()" id="check">Check</button>
</div>
<div id="buttons">
<button onclick="kill(this,'xray')">Restart Xray</button>
<button onclick="kill(this,'i2p')">Restart i2p</button>
<button onclick="kill(this,'nodpi')">Restart NoDPI</button>
</div>
</div>
<script>
function kill(ctl,id) {
    console.log(ctl, id);
    ctl.disabled=true;
    ctl.classList.add('disabled');
    fetch('/cgi-bin/kill_one.cgi?'+id)
        .then(response => {
            if (!response.ok) {
                throw new Error(`HTTP error! status: ${response.status}`);
            }
            return response.json();
        })
        .then(data => {
            window.setTimeout(check,10000);
            ctl.disabled=false;
            ctl.classList.remove('disabled');
            check();
        })
        .catch(error => {
            console.error('Error fetching data:', error);
            ctl.disabled=false;
            ctl.classList.remove('disabled');
        });
}
function check(){
    const urls = [
        'xray',
        'i2p',
        'proxy',
        'nodpi'
    ];
    const labels = document.querySelectorAll('.placeholder');
    labels.forEach(item => {
        item.innerHTML = '<span class="loading">...</span>';
    });
    urls.forEach(url => {
        fetch('/cgi-bin/check_one.cgi?'+url)
            .then(response => {
                if (!response.ok) {
                    throw new Error(`HTTP error! status: ${response.status}`);
                }
                return response.json();
            })
            .then(data => {
                Object.entries(data).forEach(([key, value]) => {
                    const element = document.getElementById(key);
                    if (element) {
                        if(value == 1) {
                            element.classList.remove('txt_error');
                            element.classList.add('txt_ok');
                            element.textContent="OK";
                        } else if(value == 0) {
                            element.classList.add('txt_error');
                            element.classList.remove('txt_ok');
                            element.textContent="ERROR";
                        } else {
                            element.classList.remove('txt_error');
                            element.classList.add('txt_ok');
                            element.textContent = value;
                        }
                    }
                });
            })
            .catch(error => {
                console.error('Error fetching data:', error);
            });
    });
}
check();
</script>
</body>
</html>
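The page itself (saved, say, as index.html) goes straight into Apache’s DocumentRoot, which the env output earlier showed as /var/www/html, along with whatever background image the CSS points to (bg-image.jpeg is just a placeholder name):
cp index.html bg-image.jpeg /var/www/html/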
Result: (screenshot of the finished page: the status table, the Check button, and the three restart buttons on top of the background image)
That’s it. Pure admin work: shell and a bit of HTML and JavaScript.