WIP: added tools command; closes #44 (#60)

* added tools command with placeholders for un/reinstall along with placeholder tests

* added missing docs build dependency

* updated documentation to reflect tools vs install

* refactored some code for DRY, fixed up prior merge with master

* fixed broken tests in test_recon_pipeline_shell

* existing tests all passing

* added tools list command

* added tools list command

* added tools reinstall

* removed lint

* fixed reinstall test

* fixed install go test

* fixed go install test again
epi052
2020-06-27 21:23:16 -05:00
committed by GitHub
parent 1ad3adca82
commit 9d5cac6b34
12 changed files with 288 additions and 118 deletions

View File

@@ -58,14 +58,14 @@ pipenv install
pipenv shell
```
After installing the python dependencies, the `recon-pipeline` shell provides its own [install](https://recon-pipeline.readthedocs.io/en/latest/api/commands.html#install) command (seen below). A simple `install all` will handle all additional installation steps.
After installing the python dependencies, the `recon-pipeline` shell provides its own [tools](https://recon-pipeline.readthedocs.io/en/latest/api/commands.html#tools) command (seen below). A simple `tools install all` will handle all additional installation steps.
> Ubuntu Note (and newer kali versions): You may consider running `sudo -v` prior to running `./recon-pipeline.py`. `sudo -v` will refresh your creds, and the underlying subprocess calls during installation won't prompt you for your password. It'll work either way though.
Individual tools may be installed by running `install TOOLNAME` where `TOOLNAME` is one of the known tools that make
Individual tools may be installed by running `tools install TOOLNAME` where `TOOLNAME` is one of the known tools that make
up the pipeline.
The installer maintains a (naive) list of installed tools at `~/.local/recon-pipeline/tools/.tool-dict.pkl`. The installer in no way attempts to be a package manager. It knows how to execute the steps necessary to install its tools. Beyond that, it's like Jon Snow, **it knows nothing**.
The installer maintains a (naive) list of installed tools at `~/.local/recon-pipeline/tools/.tool-dict.pkl`. The installer in no way attempts to be a package manager. It knows how to execute the steps necessary to install and remove its tools. Beyond that, it's like Jon Snow, **it knows nothing**.
[![asciicast](https://asciinema.org/a/318395.svg)](https://asciinema.org/a/318395)
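For a quick manual check, that pickled state file can be read back with nothing but the standard library. This is only an illustrative sketch, not part of the pipeline's API; the default path and the `installed`/`path` keys mirror what the shell's `tools list` command reports:

```python
# Illustrative sketch only: peek at the installer's (naive) tool state by hand.
# Assumes the default location used by the recon-pipeline shell.
import pickle
from pathlib import Path

tool_dict_path = Path.home() / ".local" / "recon-pipeline" / "tools" / ".tool-dict.pkl"

with tool_dict_path.open("rb") as f:
    tool_dict = pickle.load(f)

for name, entry in tool_dict.items():
    # each entry tracks (at least) an "installed" flag and a "path"
    print(f"{name}: installed={entry.get('installed')}  path={entry.get('path')}")
```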
@@ -239,7 +239,7 @@ The backbone of this pipeline is spotify's [luigi](https://github.com/spotify/lu
- Make sure two instances of the same task are not running simultaneously
- Provide visualization of everything that's going on
While in the `recon-pipeline` shell, running `install luigi-service` will copy the `luigid.service` file provided in the
While in the `recon-pipeline` shell, running `tools install luigi-service` will copy the `luigid.service` file provided in the
repo to its appropriate systemd location and start/enable the service. The result is that the central scheduler is up
and running easily.
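A rough way to confirm the service landed where it should is sketched below. This assumes a systemd host and is only an illustration; the project's own tests shell out to `systemctl is-enabled` in the same way:

```python
# Sketch: verify the luigid service after `tools install luigi-service` (assumes systemd).
import subprocess
from pathlib import Path

# unit file copied into place by the installer
print(Path("/lib/systemd/system/luigid.service").exists())

proc = subprocess.run("systemctl is-enabled luigid.service".split(), stdout=subprocess.PIPE)
print(proc.stdout.decode().strip())  # expect "enabled" once the service is installed
```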

View File

@@ -9,23 +9,23 @@ Commands
``recon-pipeline`` provides a handful of commands:
- ``install``
- ``scan``
- ``status``
- ``database``
- ``view``
- :ref:`tools_command`
- :ref:`scan_command`
- :ref:`status_command`
- :ref:`database_command`
- :ref:`view_command`
All other available commands are inherited from `cmd2 <https://github.com/python-cmd2/cmd2>`_.
.. _install_command:
.. _tools_command:
install
#######
tools
#####
.. argparse::
:module: pipeline.recon
:func: install_parser
:prog: install
:func: tools_parser
:prog: tools
.. _database_command:

View File

@@ -51,15 +51,15 @@ Both OSs After ``pipenv`` Install
Everything Else
###############
After installing the python dependencies, the recon-pipeline shell provides its own :ref:`install_command` command (seen below).
A simple ``install all`` will handle all installation steps. Installation has **only** been tested on **Kali 2019.4 and Ubuntu 18.04/20.04**.
After installing the python dependencies, the recon-pipeline shell provides its own :ref:`tools_command` command (seen below).
A simple ``tools install all`` will handle all installation steps. Installation has **only** been tested on **Kali 2019.4 and Ubuntu 18.04/20.04**.
**Ubuntu Note (and newer kali versions)**: You may consider running ``sudo -v`` prior to running ``./recon-pipeline.py``. ``sudo -v`` will refresh your creds, and the underlying subprocess calls during installation won't prompt you for your password. It'll work either way though.
Individual tools may be installed by running ``install TOOLNAME`` where ``TOOLNAME`` is one of the known tools that make
Individual tools may be installed by running ``tools install TOOLNAME`` where ``TOOLNAME`` is one of the known tools that make
up the pipeline.
The installer maintains a (naive) list of installed tools at ``~/.local/recon-pipeline/tools/.tool-dict.pkl``. The installer in no way attempts to be a package manager. It knows how to execute the steps necessary to install its tools. Beyond that, it's
The installer maintains a (naive) list of installed tools at ``~/.local/recon-pipeline/tools/.tool-dict.pkl``. The installer in no way attempts to be a package manager. It knows how to execute the steps necessary to install and remove its tools. Beyond that, it's
like Jon Snow, **it knows nothing**.
Tools can also be uninstalled using the ``tools uninstall all`` command. It is also possible to uninstall them individually in the same manner as shown above.
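Under the hood, each tool's ``install_commands`` are executed one by one and the collected return codes decide the ``installed`` flag that gets pickled back to disk. A rough sketch of that bookkeeping (illustrative only, not the shell's actual code):

```python
# Rough sketch of the installer's bookkeeping; not the shell's real implementation.
import pickle
import subprocess
from pathlib import Path

def run_install(tool: str, tool_dict: dict, tools_dir: Path) -> bool:
    """Run a tool's install commands and persist the resulting state."""
    retvals = []
    for command in tool_dict[tool].get("install_commands", []):
        proc = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        proc.communicate()
        retvals.append(proc.returncode)

    # all commands must exit 0 for the tool to count as installed
    tool_dict[tool]["installed"] = all(code == 0 for code in retvals)

    # the state file is just a pickled dict, rewritten after every action
    pickle.dump(tool_dict, (tools_dir / ".tool-dict.pkl").open("wb"))
    return tool_dict[tool]["installed"]
```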

View File

@@ -11,10 +11,8 @@ provides the following two benefits:
- Make sure two instances of the same task are not running simultaneously
- Provide :ref:`visualization <visualization-ref-label>` of everything that's going on
While in the ``recon-pipeline`` shell, running ``install luigi-service`` will copy the ``luigid.service``
While in the ``recon-pipeline`` shell, running ``tools install luigi-service`` will copy the ``luigid.service``
file provided in the repo to its appropriate systemd location and start/enable the service. The result is that the
central scheduler is up and running easily.
The other option is to add ``--local-scheduler`` to your :ref:`scan_command` command from within the ``recon-pipeline`` shell.

View File

@@ -7,7 +7,7 @@ Setup
#####
To use the web console, you'll need to :ref:`install the luigid service<install-ref-label>`. Assuming you've already
installed ``pipenv`` and created a virtual environment, you can simply run the ``install luigi-service``
installed ``pipenv`` and created a virtual environment, you can simply run the ``tools install luigi-service`` command
from within the pipeline.
Dashboard

View File

@@ -12,7 +12,9 @@ import selectors
import threading
import subprocess
import webbrowser
from enum import IntEnum
from pathlib import Path
from typing import List, NewType
DEFAULT_PROMPT = "recon-pipeline> "
@@ -62,15 +64,18 @@ from .models.searchsploit_model import SearchsploitResult # noqa: F401,E402
from .recon import ( # noqa: F401,E402
get_scans,
scan_parser,
install_parser,
uninstall_parser,
view_parser,
tools_parser,
status_parser,
database_parser,
db_detach_parser,
db_list_parser,
db_attach_parser,
db_delete_parser,
view_parser,
db_detach_parser,
db_list_parser,
tools_list_parser,
tools_install_parser,
tools_uninstall_parser,
tools_reinstall_parser,
target_results_parser,
endpoint_results_parser,
nmap_results_parser,
@@ -81,6 +86,14 @@ from .recon import ( # noqa: F401,E402
from .tools import tools # noqa: F401,E402
class ToolAction(IntEnum):
INSTALL = 0
UNINSTALL = 1
ToolActions = NewType("ToolActions", ToolAction)
# select loop, handles async stdout/stderr processing of subprocesses
selector = selectors.DefaultSelector()
@@ -144,6 +157,10 @@ class ReconShell(cmd2.Cmd):
technology_results_parser.set_defaults(func=self.print_webanalyze_results)
searchsploit_results_parser.set_defaults(func=self.print_searchsploit_results)
port_results_parser.set_defaults(func=self.print_port_results)
tools_install_parser.set_defaults(func=self.tools_install)
tools_reinstall_parser.set_defaults(func=self.tools_reinstall)
tools_uninstall_parser.set_defaults(func=self.tools_uninstall)
tools_list_parser.set_defaults(func=self.tools_list)
def _preloop_hook(self) -> None:
""" Hook function that runs prior to the cmdloop function starting; starts the selector loop. """
@@ -336,11 +353,46 @@ class ReconShell(cmd2.Cmd):
return tools
@cmd2.with_argparser(install_parser)
def do_install(self, args):
def _finalize_tool_action(self, tool: str, tool_dict: dict, return_values: List[int], action: ToolActions):
""" Internal helper to keep DRY
Args:
tool: tool on which the action has been performed
tool_dict: tools dictionary to save
return_values: accumulated return values of subprocess calls
action: ToolAction.INSTALL or ToolAction.UNINSTALL
"""
verb = ["install", "uninstall"][action.value]
if all(x == 0 for x in return_values):
# all return values in retvals are 0, i.e. all exec'd successfully; tool action has succeeded
self.poutput(style(f"[+] {tool} {verb}ed!", fg="bright_green"))
tool_dict[tool]["installed"] = True if action == ToolAction.INSTALL else False
else:
# unsuccessful tool action
tool_dict[tool]["installed"] = False if action == ToolAction.INSTALL else True
self.poutput(
style(
f"[!!] one (or more) of {tool}'s commands failed and may have not {verb}ed properly; check output from the offending command above...",
fg="bright_red",
bold=True,
)
)
# store any tool installs/failures (back) to disk
persistent_tool_dict = self.tools_dir / ".tool-dict.pkl"
pickle.dump(tool_dict, persistent_tool_dict.open("wb"))
def tools_install(self, args):
""" Install any/all of the libraries/tools necessary to make the recon-pipeline function. """
tools = self._get_dict()
if args.tool == "all":
# show all tools have been queued for installation
[
@@ -350,7 +402,7 @@ class ReconShell(cmd2.Cmd):
]
for tool in tools.keys():
self.do_install(tool)
self.do_tools(f"install {tool}")
return
@@ -366,7 +418,17 @@ class ReconShell(cmd2.Cmd):
)
# install the dependency before continuing with installation
self.do_install(dependency)
self.do_tools(f"install {dependency}")
# this prevents a stale copy of tools when dependency installs alter the state
# ex.
# amass (which depends on go) grabs copy of tools (go installed false)
# amass calls install with go as the arg
# go grabs a copy of tools
# go is installed and state is saved (go installed true)
# recursion goes back to amass call (go installed false due to stale tools data)
# amass installs and re-saves go's state as installed=false
tools = self._get_dict()
if tools.get(args.tool).get("installed"):
return self.poutput(style(f"[!] {args.tool} is already installed.", fg="yellow"))
@@ -408,33 +470,12 @@ class ReconShell(cmd2.Cmd):
retvals.append(proc.returncode)
if all(x == 0 for x in retvals):
# all return values in retvals are 0, i.e. all exec'd successfully; tool has been installed
self._finalize_tool_action(args.tool, tools, retvals, ToolAction.INSTALL)
self.poutput(style(f"[+] {args.tool} installed!", fg="bright_green"))
tools[args.tool]["installed"] = True
else:
# unsuccessful tool install
tools[args.tool]["installed"] = False
self.poutput(
style(
f"[!!] one (or more) of {args.tool}'s commands failed and may have not installed properly; check output from the offending command above...",
fg="bright_red",
bold=True,
)
)
# store any tool installs/failures (back) to disk
persistent_tool_dict = self.tools_dir / ".tool-dict.pkl"
pickle.dump(tools, persistent_tool_dict.open("wb"))
@cmd2.with_argparser(uninstall_parser)
def do_uninstall(self, args):
def tools_uninstall(self, args):
""" Uninstall any/all of the libraries/tools used by recon-pipeline"""
tools = self._get_dict()
if args.tool == "all":
# show all tools have been queued for installation
[
@@ -444,7 +485,7 @@ class ReconShell(cmd2.Cmd):
]
for tool in tools.keys():
self.do_uninstall(tool)
self.do_tools(f"uninstall {tool}")
return
@@ -454,6 +495,7 @@ class ReconShell(cmd2.Cmd):
retvals = list()
self.poutput(style(f"[*] Removing {args.tool}...", fg="bright_yellow"))
if not tools.get(args.tool).get("uninstall_commands"):
self.poutput(style(f"[*] {args.tool} removal not needed", fg="bright_yellow"))
return
@@ -470,28 +512,28 @@ class ReconShell(cmd2.Cmd):
retvals.append(proc.returncode)
if all(x == 0 for x in retvals):
# all return values in retvals are 0, i.e. all exec'd successfully; tool has been uninstalled
self._finalize_tool_action(args.tool, tools, retvals, ToolAction.UNINSTALL)
self.poutput(style(f"[+] {args.tool} removed!", fg="bright_green"))
def tools_reinstall(self, args):
""" Reinstall a given tool """
self.do_tools(f"uninstall {args.tool}")
self.do_tools(f"install {args.tool}")
tools[args.tool]["installed"] = False
def tools_list(self, args):
""" List status of pipeline tools """
for key, value in self._get_dict().items():
status = [style(":Missing:", fg="bright_magenta"), style("Installed", fg="bright_green")]
self.poutput(style(f"[{status[value.get('installed')]}] - {value.get('path') or key}"))
@cmd2.with_argparser(tools_parser)
def do_tools(self, args):
""" Manage tool actions (install/uninstall/reinstall) """
func = getattr(args, "func", None)
if func is not None:
func(args)
else:
# unsuccessful tool removal
tools[args.tool]["installed"] = True
self.poutput(
style(
f"[!!] one (or more) of {args.tool}'s commands failed and may have not been removed properly; check output from the offending command above...",
fg="bright_red",
bold=True,
)
)
# store any tool installs/failures (back) to disk
persistent_tool_dict = self.tools_dir / ".tool-dict.pkl"
pickle.dump(tools, persistent_tool_dict.open("wb"))
self.do_help("tools")
@cmd2.with_argparser(status_parser)
def do_status(self, args):
@@ -825,7 +867,7 @@ class ReconShell(cmd2.Cmd):
def do_view(self, args):
""" View results of completed scans """
if self.db_mgr is None:
return self.poutput(style("[!] you are not connected to a database", fg="magenta"))
return self.poutput(style("[!] you are not connected to a database", fg="bright_magenta"))
func = getattr(args, "func", None)

View File

@@ -6,16 +6,19 @@ from .masscan import MasscanScan, ParseMasscanOutput
from .nmap import ThreadedNmapScan, SearchsploitScan
from .config import top_udp_ports, top_tcp_ports, defaults, web_ports
from .parsers import (
install_parser,
uninstall_parser,
scan_parser,
view_parser,
tools_parser,
status_parser,
database_parser,
db_attach_parser,
db_delete_parser,
db_detach_parser,
db_list_parser,
view_parser,
tools_list_parser,
tools_install_parser,
tools_uninstall_parser,
tools_reinstall_parser,
target_results_parser,
endpoint_results_parser,
nmap_results_parser,

View File

@@ -6,14 +6,6 @@ from .config import defaults
from .helpers import get_scans
from ..tools import tools
# options for ReconShell's 'install' command
install_parser = cmd2.Cmd2ArgumentParser()
install_parser.add_argument("tool", help="which tool to install", choices=list(tools.keys()) + ["all"])
# options for ReconShell's 'uninstall' command
uninstall_parser = cmd2.Cmd2ArgumentParser()
uninstall_parser.add_argument("tool", help="which tool to uninstall", choices=list(tools.keys()) + ["all"])
# options for ReconShell's 'status' command
status_parser = cmd2.Cmd2ArgumentParser()
status_parser.add_argument(
@@ -110,6 +102,24 @@ db_delete_parser = database_subparsers.add_parser("delete", help="Delete the sel
db_attach_parser = database_subparsers.add_parser("attach", help="Attach to the selected database")
db_detach_parser = database_subparsers.add_parser("detach", help="Detach from the currently attached database")
# top level and subparsers for ReconShell's tools command
tools_parser = cmd2.Cmd2ArgumentParser()
tools_subparsers = tools_parser.add_subparsers(
title="subcommands", help="Manage tool actions (install/uninstall/reinstall)"
)
tools_install_parser = tools_subparsers.add_parser(
"install", help="Install any/all of the libraries/tools necessary to make the recon-pipeline function"
)
tools_install_parser.add_argument("tool", help="which tool to install", choices=list(tools.keys()) + ["all"])
tools_uninstall_parser = tools_subparsers.add_parser("uninstall", help="Remove the already installed tool")
tools_uninstall_parser.add_argument("tool", help="which tool to uninstall", choices=list(tools.keys()) + ["all"])
tools_reinstall_parser = tools_subparsers.add_parser("reinstall", help="Uninstall and then Install a given tool")
tools_reinstall_parser.add_argument("tool", help="which tool to reinstall", choices=list(tools.keys()) + ["all"])
tools_list_parser = tools_subparsers.add_parser("list", help="Show status of pipeline tools")
# ReconShell's view command
view_parser = cmd2.Cmd2ArgumentParser()

View File

@@ -8,4 +8,4 @@ install_commands:
- !join ["bash -c 'if [ ! $(grep $(dirname", *gotool, ")", *bashrc, ") ]; then echo PATH=${PATH}:$(dirname", *gotool, ") >>", *bashrc, "; fi'"]
uninstall_commands:
- !join [sudo, rm, -r, !get_default "{goroot}"]
- !join [sudo, rm, -r, !join_path [!get_default "{goroot}", go]]
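The tool definitions lean on custom YAML tags such as `!join`, `!join_path`, and `!get_default`. Purely as an illustration of the mechanism (this is not the project's loader, and the real tag semantics may differ), a `!join`-style constructor can be registered with PyYAML like so:

```python
# Illustrative only: registering a "!join"-style tag with PyYAML.
# The pipeline's actual loader and tag semantics may differ.
import yaml

def join_constructor(loader: yaml.SafeLoader, node: yaml.Node) -> str:
    # build a single command string from a YAML sequence
    parts = loader.construct_sequence(node)
    return " ".join(str(part) for part in parts)

yaml.SafeLoader.add_constructor("!join", join_constructor)

doc = yaml.safe_load("command: !join [sudo, rm, -r, /tmp/example]")
print(doc["command"])  # -> "sudo rm -r /tmp/example"
```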

View File

@@ -6,8 +6,9 @@ from pipeline.tools import tools
@pytest.mark.parametrize("test_input", list(tools.keys()) + ["all"])
def test_install_parser_good(test_input):
parsed = install_parser.parse_args([test_input])
def test_tools_parsers_good(test_input):
for parser in [tools_install_parser, tools_uninstall_parser, tools_reinstall_parser]:
parsed = parser.parse_args([test_input])
assert parsed.tool == test_input
@@ -21,9 +22,10 @@ def test_install_parser_good(test_input):
(["all", "--invalid"], SystemExit),
],
)
def test_install_parser_raises(test_input, expected):
def test_tools_parsers_raises(test_input, expected):
for parser in [tools_install_parser, tools_uninstall_parser, tools_reinstall_parser]:
with pytest.raises(expected):
install_parser.parse_args([test_input])
parser.parse_args([test_input])
@pytest.mark.parametrize(

View File

@@ -293,12 +293,25 @@ class TestReconShell:
self.shell.do_status("--host 127.0.0.1 --port 1111")
assert mock_browser.called
# ("all", "commands failed and may have not installed properly", 1)
@pytest.mark.parametrize("test_input, expected", [(None, "Manage tool actions (install/uninstall/reinstall)")])
def test_do_tools(self, test_input, expected, capsys):
if test_input is None:
self.shell.do_tools("")
assert expected in capsys.readouterr().out
# after tools moved to DB, update this test
@pytest.mark.parametrize(
"test_input, expected, return_code", [("all", "is already installed", 0), ("amass", "dependency", 1)]
"test_input, expected, return_code",
[
("all", "[-] go queued", 0),
("amass", "check output from the offending command above", 1),
("amass", "has an unmet dependency", 0),
("waybackurls", "[!] waybackurls has an unmet dependency", 0),
("go", "[+] go installed!", 0),
("masscan", "[!] masscan is already installed.", 0),
],
)
def test_do_install(self, test_input, expected, return_code, capsys, tmp_path):
def test_tools_install(self, test_input, expected, return_code, capsys, tmp_path):
process_mock = MagicMock()
attrs = {"communicate.return_value": (b"output", b"error"), "returncode": return_code}
process_mock.configure_mock(**attrs)
@@ -306,17 +319,89 @@ class TestReconShell:
tooldir = tmp_path / ".local" / "recon-pipeline" / "tools"
tooldir.mkdir(parents=True, exist_ok=True)
tools["go"]["installed"] = False
tools["waybackurls"]["installed"] = True
tools["masscan"]["installed"] = True
tools["amass"]["shell"] = False
tools["amass"]["installed"] = False
pickle.dump(tools, (tooldir / ".tool-dict.pkl").open("wb"))
with patch("subprocess.Popen", autospec=True) as mocked_popen:
mocked_popen.return_value = process_mock
self.shell.tools_dir = tooldir
self.shell.do_install(test_input)
self.shell.do_tools(f"install {test_input}")
if test_input != "masscan":
assert mocked_popen.called
assert expected in capsys.readouterr().out
if test_input != "all" and return_code == 0:
assert self.shell._get_dict().get(test_input).get("installed") is True
# after tools moved to DB, update this test
@pytest.mark.parametrize(
"test_input, expected, return_code",
[
("all", "waybackurls queued", 0),
("amass", "check output from the offending command above", 1),
("waybackurls", "[+] waybackurls uninstalled!", 0),
("go", "[!] go is not installed", 0),
],
)
def test_tools_uninstall(self, test_input, expected, return_code, capsys, tmp_path):
process_mock = MagicMock()
attrs = {"communicate.return_value": (b"output", b"error"), "returncode": return_code}
process_mock.configure_mock(**attrs)
tooldir = tmp_path / ".local" / "recon-pipeline" / "tools"
tooldir.mkdir(parents=True, exist_ok=True)
tools["go"]["installed"] = False
tools["waybackurls"]["installed"] = True
tools["amass"]["shell"] = False
tools["amass"]["installed"] = True
pickle.dump(tools, (tooldir / ".tool-dict.pkl").open("wb"))
with patch("subprocess.Popen", autospec=True) as mocked_popen:
mocked_popen.return_value = process_mock
self.shell.tools_dir = tooldir
self.shell.do_tools(f"uninstall {test_input}")
if test_input != "go":
assert mocked_popen.called
assert expected in capsys.readouterr().out
if test_input != "all" and return_code == 0:
assert self.shell._get_dict().get(test_input).get("installed") is False
def test_tools_reinstall(self, capsys):
self.shell.do_tools("reinstall amass")
output = capsys.readouterr().out
assert "[*] Removing amass..." in output or "[!] amass is not installed." in output
assert "[*] Installing amass..." in output or "[!] amass is already installed." in output
def test_tools_list(self, capsys, tmp_path):
tooldir = tmp_path / ".local" / "recon-pipeline" / "tools"
tooldir.mkdir(parents=True, exist_ok=True)
tools["go"]["installed"] = True
tools["waybackurls"]["installed"] = True
tools["masscan"]["installed"] = False
regexes = [r"Installed.*go/bin/go", r"Installed.*bin/waybackurls", r":Missing:.*tools/masscan"]
pickle.dump(tools, (tooldir / ".tool-dict.pkl").open("wb"))
self.shell.tools_dir = tooldir
self.shell.do_tools("list")
output = capsys.readouterr().out
for regex in regexes:
assert re.search(regex, output)
@pytest.mark.parametrize(
"test_input, expected, db_mgr",
[

View File

@@ -1,3 +1,4 @@
import os
import pickle
import shutil
import tempfile
@@ -19,6 +20,7 @@ class TestUnmockedToolsInstall:
self.tmp_path = Path(tempfile.mkdtemp())
self.shell.tools_dir = self.tmp_path / ".local" / "recon-pipeline" / "tools"
self.shell.tools_dir.mkdir(parents=True, exist_ok=True)
os.chdir(self.shell.tools_dir)
def teardown_method(self):
def onerror(func, path, exc_info):
@@ -31,27 +33,38 @@ class TestUnmockedToolsInstall:
pickle.dump(tools_dict, Path(self.shell.tools_dir / ".tool-dict.pkl").open("wb"))
tool = Path(tools_dict.get(tool_name).get("path"))
if install and exists is False:
assert tool.exists() is False
elif not install and exists is True:
assert tool.exists() is True
if install:
utils.run_cmd(self.shell, f"install {tool_name}")
utils.run_cmd(self.shell, f"tools install {tool_name}")
assert tool.exists() is True
else:
utils.run_cmd(self.shell, f"uninstall {tool_name}")
utils.run_cmd(self.shell, f"tools uninstall {tool_name}")
assert tool.exists() is False
def setup_go_test(self, tool_name, tool_dict):
# install go in tmp location
dependency = "go"
dependency_path = f"{self.shell.tools_dir}/go/bin/go"
tmp_path = tempfile.mkdtemp()
tool_dict.get(dependency)["path"] = dependency_path
tool_dict.get(dependency).get("install_commands")[1] = f"tar -C {self.shell.tools_dir} -xvf /tmp/go.tar.gz"
tool_dict.get(dependency).get("install_commands")[
0
] = f"wget -q https://dl.google.com/go/go1.14.4.linux-amd64.tar.gz -O {tmp_path}/go.tar.gz"
tool_dict.get(dependency).get("install_commands")[
1
] = f"tar -C {self.shell.tools_dir} -xvf {tmp_path}/go.tar.gz"
tool_dict.get(dependency).get("uninstall_commands")[0] = f"rm -rvf {self.shell.tools_dir}/go"
tool_dict[dependency]["uninstall_commands"].append(f"rm -rvf {tmp_path}")
# handle env for local go install
if tool_name != "go":
tmp_go_path = f"{self.shell.tools_dir}/mygo"
Path(tmp_go_path).mkdir(parents=True, exist_ok=True)
tool_dict.get(tool_name)["environ"]["GOPATH"] = tmp_go_path
@@ -60,6 +73,10 @@ class TestUnmockedToolsInstall:
tool_dict.get(tool_name)["path"] = tool_path
tool_dict.get(tool_name)["installed"] = False
tool_dict.get(dependency)["installed"] = False
print(tool_dict.get(tool_name))
print(tool_dict.get(dependency))
return tool_dict
@@ -67,10 +84,18 @@ class TestUnmockedToolsInstall:
tool = "masscan"
tools_copy = tools.copy()
tmp_path = tempfile.mkdtemp()
tool_path = f"{self.shell.tools_dir}/{tool}"
tools_copy.get(tool)["path"] = tool_path
tools_copy.get(tool).get("install_commands")[2] = f"mv /tmp/masscan/bin/masscan {tool_path}"
tools_copy.get(tool)["installed"] = False
tools_copy.get(tool).get("install_commands")[
0
] = f"git clone https://github.com/robertdavidgraham/masscan {tmp_path}/masscan"
tools_copy.get(tool).get("install_commands")[1] = f"make -s -j -C {tmp_path}/masscan"
tools_copy.get(tool).get("install_commands")[2] = f"mv {tmp_path}/masscan/bin/masscan {tool_path}"
tools_copy.get(tool).get("install_commands")[3] = f"rm -rf {tmp_path}/masscan"
tools_copy.get(tool).get("install_commands")[4] = f"sudo setcap CAP_NET_RAW+ep {tool_path}"
tools_copy.get(tool).get("uninstall_commands")[0] = f"rm {tool_path}"
@@ -96,9 +121,18 @@ class TestUnmockedToolsInstall:
tools_copy = tools.copy()
tool_path = f"{self.shell.tools_dir}/{tool}"
tmp_path = tempfile.mkdtemp()
tools_copy.get(tool)["path"] = tool_path
tools_copy.get(tool).get("install_commands")[4] = f"mv /tmp/aquatone/aquatone {tool_path}"
tools_copy.get(tool).get("install_commands")[0] = f"mkdir /{tmp_path}/aquatone"
tools_copy.get(tool).get("install_commands")[
1
] = f"wget -q https://github.com/michenriksen/aquatone/releases/download/v1.7.0/aquatone_linux_amd64_1.7.0.zip -O /{tmp_path}/aquatone/aquatone.zip"
tools_copy.get(tool).get("install_commands")[
3
] = f"unzip /{tmp_path}/aquatone/aquatone.zip -d /{tmp_path}/aquatone"
tools_copy.get(tool).get("install_commands")[4] = f"mv /{tmp_path}/aquatone/aquatone {tool_path}"
tools_copy.get(tool).get("install_commands")[5] = f"rm -rf /{tmp_path}/aquatone"
tools_copy.get(tool).get("uninstall_commands")[0] = f"rm {tool_path}"
self.perform_add_remove(tools_copy, tool, True, False)
@@ -108,11 +142,7 @@ class TestUnmockedToolsInstall:
tool = "go"
tools_copy = tools.copy()
tool_path = f"{self.shell.tools_dir}/go/bin/go"
tools_copy.get(tool)["path"] = tool_path
tools_copy.get(tool).get("install_commands")[1] = f"tar -C {self.shell.tools_dir} -xvf /tmp/go.tar.gz"
tools_copy.get(tool).get("uninstall_commands")[0] = f"sudo rm -r {self.shell.tools_dir}"
tools_copy.update(self.setup_go_test(tool, tools_copy))
self.perform_add_remove(tools_copy, tool, True, False)
self.perform_add_remove(tools_copy, tool, False, True)
@@ -173,7 +203,7 @@ class TestUnmockedToolsInstall:
assert not Path("/usr/local/bin/luigid").exists()
utils.run_cmd(self.shell, "install luigi-service")
utils.run_cmd(self.shell, "tools install luigi-service")
assert Path("/lib/systemd/system/luigid.service").exists()
@@ -185,7 +215,7 @@ class TestUnmockedToolsInstall:
assert Path("/usr/local/bin/luigid").exists()
utils.run_cmd(self.shell, "uninstall luigi-service")
utils.run_cmd(self.shell, "tools uninstall luigi-service")
proc = subprocess.run("systemctl is-enabled luigid.service".split(), stdout=subprocess.PIPE)
assert proc.stdout.decode().strip() != "enabled"
@@ -266,12 +296,12 @@ class TestUnmockedToolsInstall:
assert not Path(copied_searchsploit_rc).exists()
assert not Path(dependency_path).exists()
utils.run_cmd(self.shell, f"install {tool}")
utils.run_cmd(self.shell, f"tools install {tool}")
assert subprocess.run(f"grep {self.shell.tools_dir} {copied_searchsploit_rc}".split()).returncode == 0
assert Path(copied_searchsploit_rc).exists()
assert Path(dependency_path).exists()
utils.run_cmd(self.shell, f"uninstall {tool}")
utils.run_cmd(self.shell, f"tools uninstall {tool}")
assert Path(dependency_path).exists() is False
@pytest.mark.parametrize("test_input", ["install", "update"])