Commit 7724df67, authored by openaiops

Initial commit

LightAD-main/.gitignore

0 → 100644
+160 −0
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

LightAD-main/LICENSE

0 → 100644
+21 −0
MIT License

Copyright (c) 2023 Boxi Yu

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

LightAD-main/README.md

0 → 100644
+73 −0
# LightAD
A toolkit for lightweight log anomaly detection (LogAD) and automated LogAD model selection. To learn more, please refer to our ICSE'24 conference paper "Deep Learning or Classical Machine Learning? An Empirical Study on Log-Based Anomaly Detection".

You can achieve state-of-the-art (SOTA) performance on the five most popular LogAD datasets using our classical machine learning methods combined with simple log preprocessing techniques. Below is part of the performance comparison between the classical machine learning methods and the deep learning methods:

<img src="table2.png" style="width:50%;height:auto;">

In the comparison table, the highest value in each row is shown in bold; N/A marks cells with missing data, and hyphens mark runs that failed with out-of-memory errors.

## Step 1: Check Python Dependencies

To install LightAD dependencies, please run:

```shell
pip install -r requirements.txt
```

## Step 2: Prepare Datasets

The example 100k HDFS dataset is provided under ```/datasets/orignal_datasets```.

The original full datasets can be found at: (1) HDFS dataset: https://doi.org/10.5281/zenodo.1144100, (2) Supercomputer datasets: https://www.usenix.org/cfdr-data.

If you want to run LightAD on the full datasets, download the data from the websites above, put the corresponding files in ```/datasets/orignal_datasets```, and name each file after its dataset, without any suffix.

For the HDFS dataset, you don't have to worry about ```anamaly_label.csv```; the file already contains all the labels for the full dataset.
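For example, assuming the raw BGL log downloaded from CFDR arrives as ```BGL.log``` (the actual download file names may differ, and the exact target names expected by ```preprocess.py``` are an assumption here), placing it would look like this:

```shell
# Hypothetical example: the source file names below are placeholders.
# Each raw log goes into /datasets/orignal_datasets, named after its
# dataset with no file extension.
mv ~/Downloads/BGL.log datasets/orignal_datasets/bgl
mv ~/Downloads/Spirit.log datasets/orignal_datasets/spirit
```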

## Step 3: Preprocess Datasets
To preprocess any dataset, please run:

```shell
python preprocess.py --dataset [dataset_you_want_to_preprocess] 
```
The [dataset_you_want_to_preprocess] can be one of ```hdfs```, ```bgl```, ```spirit```, ```liberty```, or ```tbird```.
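For example, to preprocess the example HDFS dataset shipped with the repository:

```shell
python preprocess.py --dataset hdfs
```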
## Step 4: Conduct Log Anomaly Detection
### On the HDFS Dataset:

If you want to conduct anomaly detection on the entire HDFS dataset, please run:

```shell
python main_hdfs.py --model [model_you_want_to_use]
```
If you want to conduct anomaly detection on the deduplicated HDFS dataset, please run:

```shell
python main_hdfs.py --model [model_you_want_to_use] --eliminate True
```

The models that can be deployed on HDFS are ```"knn"``` (K-Nearest-Neighbor), ```"dt"``` (Decision Tree), and ```"slfn"``` (Single Layer Feed Forward Neural Network).
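For example, to run the K-Nearest-Neighbor model on the deduplicated HDFS dataset:

```shell
python main_hdfs.py --model knn --eliminate True
```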
### On the Supercomputer Datasets:

If you want to conduct anomaly detection on the supercomputer datasets, please run:

```shell
python main_super.py --dataset [dataset_you_want_to_use]
```
The supercomputer dataset [dataset_you_want_to_use] can be ```"bgl"```, ```"tbird"```, ```"spirit"```, or ```"liberty"```. Only ```"knn"``` is supported here, because we do not preprocess the supercomputer datasets into numerical vectors.
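For example, to run anomaly detection on the BGL dataset:

```shell
python main_super.py --dataset bgl
```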
## Step 5: Select the Optimal Model

This step is performed on the deduplicated HDFS dataset which can be obtained by:
```shell
python preprocess.py --dataset hdfs --eliminate True 
```
If you want to get the ModelScore of a model (the higher the ModelScore, the better the model performs under the current optimization strategy), please run:

```shell
python main_opt.py --model [model_you_want_to_use] --l1 [importance_of_model_accuracy] --l2 [importance_of_train_time] --l3 [importance_of_infer_time]
```
```l1```, ```l2```, and ```l3``` represent the relative importance of model accuracy (```F1-score```), model training time, and model inference time, respectively. When setting these three weights, make sure each is greater than 0 and that they sum to 1.
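For example, to score ```knn``` with accuracy weighted most heavily (the specific weights below are arbitrary; any positive values summing to 1 are valid):

```shell
python main_opt.py --model knn --l1 0.8 --l2 0.1 --l3 0.1
```

Intuitively, a larger ```l1``` favors the most accurate model, while a larger ```l2``` or ```l3``` favors models that train or infer faster.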

### Feedback
Should you have any questions, please post them on [the issue page](https://github.com/BoxiYu/LightAD/issues) or email Boxi Yu at boxiyu@link.cuhk.edu.cn.
+0 −0

File added.

The preview exceeds the size limit; changes are collapsed.

+0 −0

File added.

The preview exceeds the size limit; changes are collapsed.