Wget and Curl
wget --recursive --no-parent --no-clobber --no-check-certificate -P ./ https://website.com/path/to/specific/folder/
How to consolidate rows (to get grouped sums)
How to reference a cell value from another sheet
How to delete blank rows in an Excel table: simply filter the table by blanks, and then delete those filtered rows
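A quick way to sanity-check grouped sums outside of Excel is pandas — a minimal sketch, with made-up column names standing in for an Excel table:

```python
import pandas as pd

# Hypothetical data standing in for an Excel table
df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "sales":  [100, 200, 50, 25],
})

# Consolidate rows: one row per region, with the summed sales
grouped = df.groupby("region", as_index=False)["sales"].sum()
print(grouped)
```

This is the same operation as a consolidated/subtotaled range in Excel, just reproducible in code.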
A friendly reminder that, when using .pem keys, make sure to set permissions to 600: chmod 600 file.pem
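If the permission change needs to happen inside a setup script rather than by hand, the same thing can be done from Python — a small sketch using a throwaway temp file instead of a real key:

```python
import os
import stat
import tempfile

# Create a stand-in key file, then restrict it to owner read/write (600)
fd, path = tempfile.mkstemp(suffix=".pem")
os.close(fd)
os.chmod(path, 0o600)

# Verify: only S_IRUSR | S_IWUSR should be set
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.remove(path)
```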
Just type Ctrl+W and the menu on the bottom of the page will show you the navigation options (e.g. Ctrl-V for the bottom of the page, Ctrl-Y for the top of the page, etc.)
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--gen_template", action='store_true',
                        help="Generate template file for adding a new user.")
    parser.add_argument("--init_user", default=77,
                        help="Initialize user-specific files and data directories.")
    parser.add_argument("--user_json", type=str, required=False,
                        help="Path to JSON file completed from template (required with --init_user).")
    args = parser.parse_args()
    main(args)  # main() is assumed to be defined earlier in the script
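To check that a parser behaves as expected without running the whole script, parse_args can be given an explicit argv list instead of reading sys.argv — here with a hypothetical JSON path:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--gen_template", action="store_true",
                    help="Generate template file for adding a new user.")
parser.add_argument("--init_user", default=77,
                    help="Initialize user-specific files and data directories.")
parser.add_argument("--user_json", type=str, required=False,
                    help="Path to JSON file completed from template (required with --init_user).")

# Simulate command-line input instead of reading sys.argv
args = parser.parse_args(["--gen_template", "--user_json", "users/new_user.json"])
print(args.gen_template)   # True
print(args.user_json)      # users/new_user.json
```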
#!/bin/bash
cd ../main
ps -ef | grep 'python app.py' | grep -v grep | awk '{print $2}' | xargs kill
git pull
rm nohup.out
nohup python app.py --save_session_data &

the first line puts us into the main folder
the second line kills any currently running Python jobs (from our app)
the third line pulls the repo for any updates
the fourth line clears out the log file
the last line runs the app
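The ps | grep | awk step just pulls the second whitespace-separated field (the PID) from matching lines while excluding the grep process itself. A Python version of that filtering logic, run against a fabricated ps -ef snippet for illustration:

```python
# Fabricated `ps -ef` output for illustration
ps_output = """\
root         1     0  0 09:00 ?  00:00:01 /sbin/init
alice     4242     1  0 09:05 ?  00:01:12 python app.py --save_session_data
alice     4300  4242  0 09:06 ?  00:00:00 grep python app.py
"""

def matching_pids(ps_text, pattern):
    """Return PIDs (field 2) of lines containing pattern, excluding grep itself."""
    pids = []
    for line in ps_text.splitlines():
        if pattern in line and "grep" not in line:
            pids.append(int(line.split()[1]))
    return pids

print(matching_pids(ps_output, "python app.py"))  # [4242]
```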
#!/bin/bash
#SBATCH -p gpu
#SBATCH -t 24:00:00

cd /main/working/dir
module load modules/anaconda3/4.3.1
conda activate env

export data_path="/some/data/path"
export HUGGINGFACE_HUB_CACHE=$data_path
export PIP_CACHE_DIR=$data_path
export TRANSFORMERS_CACHE=$data_path

python script.py
12 steps to a website:
Step 1: Buy a domain name. Squarespace, GoDaddy, and Google Domains are some of your top sources.
Step 2: Create a GitHub repo for your code. Make sure it's public
Today we will discuss how to run automated testing for a chatbot. The elements of automated testing include:
Job status logging: keeping track of the testing progression
Caching: storing looped indexes to avoid repeated testing
Chat artifacts themselves: since this is for a chatbot, we will be storing all the chat logs for future analysis. Each artifact should have a timestamp to verify against the logging progression, as well as a unique user ID to re-evaluate the origin of the simulation
Keeping track of the number of API calls
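The elements above can be sketched as one small harness. Everything here is invented for illustration — the file names, class name, and the send_message callable are all assumptions, not an actual chatbot API:

```python
import json
import logging
import time
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot-tests")

class TestRun:
    """Minimal sketch: logging, a cache of completed indexes, chat artifacts, an API-call counter."""

    def __init__(self, cache_file="completed.json", artifact_dir="artifacts"):
        self.cache_file = Path(cache_file)
        self.artifact_dir = Path(artifact_dir)
        self.artifact_dir.mkdir(exist_ok=True)
        # caching: indexes already tested, so a rerun skips them
        self.done = set(json.loads(self.cache_file.read_text())) if self.cache_file.exists() else set()
        self.api_calls = 0  # keep track of the number of API calls

    def run_case(self, idx, user_id, send_message):
        if idx in self.done:
            log.info("case %d cached, skipping", idx)  # job status logging
            return
        reply = send_message(f"test prompt {idx}")
        self.api_calls += 1
        # chat artifact: timestamped, tagged with the simulated user's ID
        artifact = {"timestamp": time.time(), "user_id": user_id,
                    "index": idx, "reply": reply}
        (self.artifact_dir / f"chat_{user_id}_{idx}.json").write_text(json.dumps(artifact))
        self.done.add(idx)
        self.cache_file.write_text(json.dumps(sorted(self.done)))
        log.info("case %d complete", idx)
```

Rerunning a case that is already in the cache costs no additional API call, which is the point of storing the looped indexes.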
Below is a sequential way of batching data by year and ingesting it as a set of csv files from a SQL server.

install.packages("data.table")
install.packages("RODBC")
library(data.table)
library(RODBC)

cxn <- odbcDriverConnect('Driver={ODBC DRIVER 13 for SQL Server};Server=website.com;Trusted_Connection=yes')
file_base <- "base/dir/folder/for/data"
table_name <- "schema.table_a"
date_column <- "date_col"

for (year in 1950:2020){
  print(paste("starting year", year))
  start_date <- paste0(year, '-01-01')
  end_date <- paste0(year, '-12-31')
  # date literals need quotes inside the SQL string
  query <- sprintf("select * from %s where %s between '%s' and '%s'", table_name, date_column, start_date, end_date)
  df <- sqlQuery(cxn, query)
  df = data....
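The year-batching logic itself is language-independent: build one bounded query per year and loop. A Python sketch of just the query construction (table and column names copied from the R snippet; the connection and CSV write are omitted):

```python
def year_query(table_name, date_column, year):
    """Build one year's extract query; date literals are quoted strings."""
    start_date = f"{year}-01-01"
    end_date = f"{year}-12-31"
    return (f"select * from {table_name} "
            f"where {date_column} between '{start_date}' and '{end_date}'")

# One query per year, 1950 through 2020 inclusive
queries = [year_query("schema.table_a", "date_col", y) for y in range(1950, 2021)]
print(len(queries))  # 71
print(queries[0])
```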