
Leaderboard

Popular Content

Showing content with the highest reputation since 07/09/18 in all areas

  1. CAN bus is a really simple communication protocol originally made for cars, but these days it is used for almost anything, even subsea Christmas trees. For getting started with CAN, the Wikipedia page is surprisingly good and makes a nice starting point, along with this URL: https://opensource.lely.com/canopen/docs/cmd-tutorial/ Anyway, I will share two simple codebases. Be warned! The code is shitty, and both together were coded in less than a week, which is why it's an uncommented mess (literally made with a knife to my throat as project-saving kung-fu in an EU project). The code is without any license. Sadly, I cannot show you the actual usage of the code as it's proprietary, but it's fairly simple, so I will just inline it here:

canbus_communicator = new CanThread("vcan0");
paxterGen3Tpdo = new PaxterGen3Tpdo();
canbus_communicator->addNode((CanMethodResolver *) paxterGen3Tpdo);
canbus_communicator->start();

The C version is very hacky. The first constraint was to write the software in C, which is nice since I like C, though I hadn't programmed in it for a couple of years, and it commits the deadly sin of "OOP function pointers", which can be hacky when distributing multiple signals in parallel. So we start by defining a simple CAN bus reader (implementation in the .c file):

enum { INVALID_LENGTH_ARGUMENT = -1 };

struct canbus_reader {
    int canbus_socket;
    char *ifname;
    int (*read_frame)(struct canbus_reader *, int *, char [8], unsigned *);
    int (*write_frame)(struct canbus_reader, int, const char *, unsigned);
};
typedef struct canbus_reader canbus_reader_t;

canbus_reader_t *canbus_reader_create(char *ifname, bool block);
void canbus_reader_destroy(canbus_reader_t *reader);

So far, pretty clean; the function pointers here just keep reading and writing contained within the namespace.
To ease parallelization we wrap this into a CAN bus thread with the following API:

struct canbus_thread;
typedef struct canbus_thread canbus_thread_t;
typedef int (*frame_handler_func)(int, char *, unsigned);

enum { MAXIMUM_AMOUNTS_OF_METHODS_PER_THREAD = 1 << 4 };

//__BEGIN_API
/**
 * Creates a handle for a canbus thread
 *
 * @param ifname The network interface name to listen to, preferably a CAN interface
 * @return A new canbus thread wrapper
 */
canbus_thread_t *canbus_thread_create(char *ifname);

/**
 * The canbus thread can handle a frame in multiple ways depending on how the different listeners require the data
 * @param canbus_reader The reader itself
 * @param func A function pointer which parses the processed CAN data on the format (id, data, len)
 * @return 0 if successful, else -1
 */
int add_method_to_canbus_thread_handler(canbus_thread_t *canbus_reader, frame_handler_func func);

int start_thread(canbus_thread_t *thread); // THIS SHOULD PROBABLY BE REFACTORED INTO THE THREAD STRUCT FOR OOPness :D

void canbus_thread_destroy(canbus_thread_t *canbusThread);
//__END_API_

Still seems... kinda clean, but also shit. Whatever, it was hastily pulled together. So we inspect this reckless programmer's C file to see the struct, because surely they know how to program C in an embedded environment... right? The thread wrapper has the following struct:

struct canbus_thread {
    canbus_reader_t *reader;
    bool isRunning;
    pthread_t _thread;
    int num_methods;
    frame_handler_func frame_handler_functions[MAXIMUM_AMOUNTS_OF_METHODS_PER_THREAD];
};

WTF, no one would be foolish enough to keep an array of function handles just to make C work like a modern OOP environment? Well, sorry to say, I am that fool.
So doing something as simple as creating and running a thread turns into this abomination:

void *run_can_thread(void *arg)
{
    int id;
    unsigned len;
    char data[8];
    canbus_thread_t *canbus_thread = (canbus_thread_t *) arg;
    DLOG(INFO, "[%s] Thread func start \n", (canbus_thread->reader->ifname));
    while (canbus_thread->isRunning) {
        if (canbus_thread->reader->read_frame(canbus_thread->reader, &id, data, &len) > 0) {
            for (int i = 0; i < MAXIMUM_AMOUNTS_OF_METHODS_PER_THREAD; i++) {
                if (canbus_thread->frame_handler_functions[i] != NULL) {
                    fprintf(stdout, "I am thread %s calling the func now!\n", canbus_thread->reader->ifname);
                    (*canbus_thread->frame_handler_functions[i])(id, data, len);
                }
            }
        }
    }
    DLOG(INFO, "[%s] Thread func stop \n", (canbus_thread->reader->ifname));
    return NULL;
}

With the sensors implemented as you see in the C repository, we got the message that we could use C++. This was actually one of my first times using C++, but given that it was an embedded environment, it was basically just really nice C.
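The function-pointer dispatch at the heart of run_can_thread can be exercised in isolation. Here is a hedged sketch (the handler names, the shortened MAX_METHODS constant, and the fake frame are my own, not from the repositories) of registering handlers in an array and fanning one frame out to each of them:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

enum { MAX_METHODS = 1 << 4 };

typedef int (*frame_handler_func)(int, char *, unsigned);

static int handler_calls = 0;

/* Hypothetical handler standing in for a real sensor parser. */
static int log_frame(int id, char *data, unsigned len)
{
    printf("frame id=0x%x len=%u first=0x%02x\n", id, len, (unsigned char)data[0]);
    handler_calls++;
    return 0;
}

/* A second handler, to show the fan-out to multiple listeners. */
static int count_frame(int id, char *data, unsigned len)
{
    (void)id; (void)data; (void)len;
    handler_calls++;
    return 0;
}

/* Fan one frame out to every registered handler, as the thread loop does. */
static void dispatch(frame_handler_func handlers[MAX_METHODS],
                     int id, char data[8], unsigned len)
{
    for (int i = 0; i < MAX_METHODS; i++)
        if (handlers[i] != NULL)
            handlers[i](id, data, len);
}
```

Unused slots in the array stay NULL, so the loop simply skips them; that is the whole "OOP in C" trick.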
Which means we can solve the above with simple classes like:

class CanMethodResolver {
public:
    virtual int handle_frame(int id, char *data, unsigned len) = 0;
};

This allows you to define an interface to an external component (like NodeJ1939 in a car) as follows:

NodeJ1939::NodeJ1939() {
    msgCount1 = 0;
    msgCount2 = 0;
    msg3State = false;
}

int NodeJ1939::handle_frame(int id, char *data, unsigned len) {
    if ((id & CAN_EFF_MASK) == ID.MESSAGE1) {
        return appendMessage1(data, len);
    } else if ((id & CAN_EFF_MASK) == ID.MESSAGE2) {
        return appendMessage2(data, len);
    } else if ((id & CAN_EFF_MASK) == ID.MESSAGE3) {
        if (!msg3State) {
            msg3State = true;
            return appendMessage30(data, len);
        } else {
            msg3State = false;
            return appendMessage31(data, len);
        }
    }
    return 0;
}

int NodeJ1939::appendMessage1(char *data, unsigned len) {
    maxVolt = ((float) ((data[0] << 8) | data[1])) / 10;
    maxCurr = ((float) ((data[2] << 8) | data[3])) / 10;
    charging = !data[4];
    msgCount1++;
    return 0;
}

int NodeJ1939::appendMessage2(char *data, unsigned len) {
    volt = ((float) ((data[0] << 8) | data[1])) / 10;
    curr = ((float) ((data[2] << 8) | data[3])) / 10;
    hwFail = (data[4] & 0x1);
    tempFail = (data[4] & 0x2);
    voltFail = (data[4] & 0x4);
    comFail = (data[4] & 0x10);
    msgCount2++;
    return 0;
}

int NodeJ1939::appendMessage30(char *data, unsigned len) {
    nomAhr = ((float) ((data[0] << 8) | data[1])) / 10;
    storedAhr = ((float) ((data[2] << 8) | data[3])) / 10;
    actualCurr = ((float) (((data[4] & 0x7f) << 8) | data[5])) / 10;
    actualPackVolt = ((float) ((data[6] << 8) | data[7])) / 10;
    soc = 100 * (storedAhr) / (nomAhr);
    return 0;
}

int NodeJ1939::appendMessage31(char *data, unsigned len) {
    maxCellVolt = ((float) ((data[0] << 8) | data[1])) / 1000;
    minCellVolt = ((float) ((data[2] << 8) | data[3])) / 1000;
    maxCellTemp = ((float) (((data[4] << 8) | data[5]) - 200)) / 10;
    minCellTemp = ((float) (((data[6] << 8) | data[7]) - 200)) / 10;
    return 0;
}

int NodeJ1939::appendMessage1X(char *data, unsigned len) {
    return 0;
}

All through simple inheritance:

class NodeJ1939 : CanMethodResolver {
public:
    NodeJ1939();
    int handle_frame(int id, char *data, unsigned len);
    struct ID {
        static const int MESSAGE1 = 0x1806E5F4;
        static const int MESSAGE2 = 0x18FF50E5;
        static const int MESSAGE3 = 0x18075000;
        static const int MESSAGE1X = 0x1806E6F4;
    } ID;
    ................. omitted.

I will upload both the C and C++ repositories once I find a decent way of sharing them with the members of HAXME without exposing them completely.
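One gotcha worth flagging in the `(data[0] << 8) | data[1]` pattern used throughout those appendMessage functions: plain `char` may be signed, so a byte like 0xF4 sign-extends before the shift and corrupts the decoded value. A small hedged helper (my own, not from the repository) that unpacks a big-endian 16-bit word safely:

```c
#include <assert.h>
#include <stdint.h>

/* Unpack two bytes as a big-endian unsigned 16-bit word.
 * Going through uint8_t avoids sign extension when plain char is signed,
 * which would corrupt any value whose bytes are >= 0x80. */
static uint16_t be16(const char *data, unsigned offset)
{
    return (uint16_t)(((unsigned)(uint8_t)data[offset] << 8) |
                      (uint8_t)data[offset + 1]);
}

/* Example: decode a voltage field scaled by 10, as appendMessage2 does. */
static float decode_voltage(const char *data)
{
    return (float)be16(data, 0) / 10.0f;
}
```

With this, a frame carrying 0x01 0xF4 decodes to 500 raw, i.e. 50.0 V after scaling, regardless of whether char is signed on the target.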
    4 points
  2. Do you have that .cap file you got by deauthing your asshole neighbor that you just cannot seem to crack, even when using GPU-accelerated cracking? Yeah, me neither, I totally would NEVER do that, because it's illegal. That said, instead of trying to crack that WPA/WPA2 (or greater) handshake (if you're having this issue with WEP, then you have bigger problems than I can help you with), why not just bypass it? This tool is pretty dated, but it's still badass. There are other great tools that have evolved since its inception, like Reaver and other tools that attack the WPS PIN instead of the actual password, but I like this one the best. Kevin Mitnick said that the weakest link in security is almost always the human factor, and for any of you who have actually been on a hack or pentesting op, that's pretty fucking true. This goal can be accomplished with no overhead (unlike using a Wifi Pineapple from Hak5 [which, btw, is completely worth the money!]). Check out this page. Here is a snippet from said page:

About

Wifiphisher is a rogue Access Point framework for conducting red team engagements or Wi-Fi security testing. Using Wifiphisher, penetration testers can easily achieve a man-in-the-middle position against wireless clients by performing targeted Wi-Fi association attacks. Wifiphisher can be further used to mount victim-customized web phishing attacks against the connected clients in order to capture credentials (e.g. from third party login pages or WPA/WPA2 Pre-Shared Keys) or infect the victim stations with malware.

Wifiphisher is...

...powerful. Wifiphisher can run for hours inside a Raspberry Pi device executing all modern Wi-Fi association techniques (including "Evil Twin", "KARMA" and "Known Beacons").

...flexible. Supports dozens of arguments and comes with a set of community-driven phishing templates for different deployment scenarios.

...modular.
Users can write simple or complicated modules in Python to expand the functionality of the tool or create custom phishing scenarios in order to conduct specific target-oriented attacks.

...easy to use. Advanced users can utilize the rich set of features that Wifiphisher offers, but beginners may start out as simply as "./bin/wifiphisher". The interactive Textual User Interface guides the tester through the build process of the attack.

...the result of extensive research. Attacks like "Known Beacons" and "Lure10", as well as state-of-the-art phishing techniques, were disclosed by our developers, and Wifiphisher was the first tool to incorporate them.

...supported by an awesome community of developers and users.

...free. Wifiphisher is available for free download, and also comes with full source code that you may study, change, or distribute under the terms of the GPLv3 license.
    3 points
  3. @AK-33 Sick build! I love how you totally have a case, but don't have a case. That design is awesome. Do you ever feel like it doesn't have enough protection? @cwade12c LOVE THE RGB... I am an RGB g00n myself (see my build below). Not going to lie, I am super duper jelly of your 4 monitors; I currently only have one and need to at least get 2, and you have 4. Love it.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

THIS IS MY FIRST TRUE BUILD -- THAT I DID ENTIRELY BY MYSELF

I use this as my daily driver for gaming and making YouTube videos. It's not super spec'd out in terms of CPU or GPU or anything like that, but to me it's a very respectable unit that I've been dreaming of since I was a little kid. If you click on the video creator, you might find dozens of videos on the channel ;) In case you're interested in all the parts and how much they cost, the rig can be seen below, or you can find it yourself on PCPartPicker: https://pcpartpicker.com/list/NBbVj2

You'll find that on PCPartPicker it says there are some problems with the build. I'm not using an older version of the BIOS; I'm even using one newer than 2203. "One SATA port is disabled" -- OK, I've got 5 others, bro. Yes, I actually had to carve out some of my fans and water cooler in order to get everything to fit... so this was a valid error, I guess XD

I also have some more stuff in my "build" that PCPartPicker doesn't have... more of the "cool streaming stuff".

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Thanks for taking a p33k at my build PL0X.
    3 points
  4. Awesome rig, @AK-33! The water cooling looks SICK! What are the specs? I love your family of laptops, @WarFox. Which laptop from the family is your favorite, and why? Also, good looks on the Run BSD stickers - I will consider requesting some if I run BSD in the future. Here's a 30 second video of my setup. The tower is not at all impressive, so I didn't show it off. I didn't do any fancy chassis or lights on my rig this round.

Specs:
Operating System: Windows 10 Pro 64-bit / Debian 64-bit (dual boot)
CPU: Intel Core i9 @ 3.60GHz, Kaby Lake, 14nm technology
RAM: 32.0GB
Motherboard: Dell Inc. 0H0P0M (U3E1)
Graphics:
  LG ULTRAWIDE (2560x1080@60Hz)
  LG ULTRAWIDE (2560x1080@60Hz)
  HP VH240a (1080x1920@60Hz)
  HP VH240a (1080x1920@60Hz)
  Intel UHD Graphics 630 (Dell) 4095MB
  NVIDIA GeForce GTX 1070 (Dell)
Storage:
  476GB KXG60ZNV512G NVMe TOSHIBA 512GB (SSD)
  931GB Seagate ST1000DM010-2EP102 (SATA)
  931GB Western Digital WD My Passport 0820 USB Device (USB (SATA))
  5589GB Western Digital WD My Book 25EE USB Device (USB (SATA))
  930GB Western Digital WD My Book 1110 USB Device (USB (SATA))
  4657GB Western Digital WD Game Drive USB Device (USB (SATA) (SSD))
    3 points
  5. My supervisor for my thesis told me about this site last year, and it's one of the most valuable resources I know of. https://arxiv.org/ is a pre-print site where scientists upload their papers before they have been peer reviewed and published, and it currently hosts over 1.9 million papers. This means that the papers on arXiv are often the same papers being published in reputable journals, but they are not behind a paywall. These are pre-prints and have not been peer reviewed yet, but you can still read through them and analyze their methodology for yourself. I used a few papers from arXiv for my thesis on quantum-resistant encryption algorithms.
    3 points
  6. Intro

In the previous post, we looked at the scope of the series and the tools that will be required. In this post, we are going to cover the most important piece of authoring Blu-rays: specifications. You can mux any video and audio input into a container file, burn any video and audio streams to a disc, encode any source to an output of your choosing, and call it "HD" or Blu-ray compliant. That does not make it so. There are specifications that must be followed in order for your content to be deemed Blu-ray compliant. Compliance is important because if the media you author is Blu-ray compliant, you can be sure that it will work on any Blu-ray player.

Specifications

In order for your media to be considered Blu-ray compliant, the following rules must be followed. We are only going to concern ourselves with the Blu-ray spec at this time, which excludes Ultra HD Blu-ray and Blu-ray 3D.

Video Codecs:
MPEG-2 - Main Profile at High Level (MP@HL) or Main Profile at Main Level (MP@ML)
H.264 (AVC) - High Profile at Level 4.1/4.0 (HP@4.1/4.0) or Main Profile at Level 4.1/4.0/3.2/3.1/3.0 (MP@4.1/4.0/3.2/3.1/3.0)
VC-1 - Advanced Profile at Level 3 (AP@L3) or Advanced Profile at Level 2 (AP@L2)

Video Frame Size:
1920×1080 29.97 frames interlaced / 59.94 fields (16:9)
1920×1080 25 frames interlaced / 50 fields (16:9)
1920×1080 24 frames progressive (16:9)
1920×1080 23.976 frames progressive (16:9)
1440×1080 29.97 frames interlaced / 59.94 fields (16:9)
1440×1080 25 frames interlaced / 50 fields (16:9)
1440×1080 24 frames progressive (16:9)
1440×1080 23.976 frames progressive (16:9)
1280×720 59.94 frames progressive (16:9)
1280×720 50 frames progressive (16:9)
1280×720 24 frames progressive (16:9)
1280×720 23.976 frames progressive (16:9)
720×480 29.97 frames interlaced / 59.94 fields (4:3/16:9)
720×576 25 frames interlaced / 50 fields (4:3/16:9)
Audio Codecs:
Dolby Digital (up to 5.1 channels with a maximum bitrate of 640 Kbit/s)
Dolby Digital Plus (up to 7.1 channels with a maximum bitrate of 4.736 Mbit/s)
Dolby TrueHD (lossless; up to 8 channels with a maximum bitrate of 18.64 Mbit/s)
DTS (up to 5.1 channels with a maximum bitrate of 1.524 Mbit/s)
DTS-HD Master Audio (up to 8 channels with a maximum bitrate of 24.5 Mbit/s)
Linear PCM (up to 8 channels with a maximum bitrate of 27.648 Mbit/s)

Subtitles:
Image bitmap subtitles (.SUP)
Text subtitles (.SRT)

Maximum Video Bitrate: 40 Mbit/s
Maximum Total Bitrate: 48 Mbit/s
Maximum Data Transfer Rate: 54 Mbit/s

I highly recommend reviewing the following resources to learn more about Blu-ray specifications and structure:

http://www.hughsnews.ca/faqs/authoritative-blu-ray-disc-bd-faq/4-physical-logical-and-application-specifications
https://www.videohelp.com/hd
https://forum.doom9.org/showthread.php?t=154533

VideoHelp and Doom9 will be your best friends. Use those resources.

Background

I could just toss the Blu-ray specs out there, but understanding is also important. We can blindly click on things and blindly pass arguments... or we can make informed decisions. Let's talk a little bit about H.264 AVC. You can think of H.264 as a family of profiles. Each profile has different rules relating to the encoding techniques and algorithms used to compress video. The Baseline profile is the primary profile used for mobile applications, video conferencing, and low-powered devices. It achieves good compression ratios using techniques like chroma subsampling and entropy coding. The Main profile is the primary profile used for standard-definition television broadcasts. It benefits from all of the Baseline profile features, in addition to improved frame-prediction algorithms. The High profile is the primary profile used for disc storage and high-definition television broadcasts.
It achieves the best compression ratios, using additional transform techniques that can reduce bandwidth requirements by up to 50%. Profile complexity is proportional to the work required to encode/decode, so higher-complexity profiles require more CPU power. Levels are another type of configuration used to set constraints on the encoder/decoder, and the list of levels reflects the history of H.264 evolving and growing as a standard. While profiles define rules for encoding techniques, levels place maximums on:

Maximum decoding speed (macroblocks/s)
Maximum frame size (macroblocks)
Maximum video bitrate (Kbit/s)

There are currently 20 levels, the lowest being Level 1 and the highest Level 6.2.

Level 1 defines constraints of:
Maximum decoding speed of 1,485 macroblocks/s
Maximum frame size of 99 macroblocks
Maximum video bitrate of 64 Kbit/s

Level 6.2 defines constraints of:
Maximum decoding speed of 16,711,680 macroblocks/s
Maximum frame size of 139,264 macroblocks
Maximum video bitrate of 800,000 Kbit/s

Thus, you arrive at resolutions ranging from 128×96 (Level 1) through 8,192×4,320 (Level 6.2). Now, when we look back at the Blu-ray specifications, you can use your knowledge of H.264 profiles and levels to choose encoding techniques and constraints that fall within the spec.

Viewing Media Specifications with MediaInfo

As you might imagine, it is important to always know the specifications of your audio and video, so a tool that can quickly show you this information in a presentable manner is essential. There are quite a few tools for this, but one of the most popular, and the one I like, is MediaInfo. It is free, open-source software that is simple to use. Download and install MediaInfo. Set your View; by default it is Basic, but I really like Tree. Open a video or set of videos under File, and that's it!
As we can see in this example, the media file I selected uses AVC and was encoded using x264. Things like the frame rate (23.976 frames/s, constant), bitrate (2,741 Kb/s), resolution (720p), and encoding settings are quickly available. Here are the encoding settings that were used for this file:

cabac=1 / ref=16 / deblock=1:0:0 / analyse=0x3:0x133 / me=umh / subme=10 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=32 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=-2 / threads=8 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=0 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=16 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=288 / keyint_min=23 / scenecut=40 / intra_refresh=0 / rc_lookahead=60 / rc=crf / mbtree=1 / crf=14.0 / qcomp=0.60 / qpmin=0 / qpmax=81 / qpstep=4 / ip_ratio=1.40 / aq=3:1.00

In the next tutorial, we will look at ripping from physical media, battling DRM, and destroying senseless region locks.
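The macroblock arithmetic behind those level limits is easy to sanity-check in code. A small sketch (the function names are my own; the Level 4.1 limit of 8,192 macroblocks per frame comes from the H.264 spec's level tables): 1080p rounds up to 120×68 = 8,160 macroblocks, which just squeezes under the Level 4.1 budget that Blu-ray relies on.

```c
#include <assert.h>

/* A macroblock is 16x16 pixels; partial blocks at the frame edges count whole. */
static int macroblocks(int width, int height)
{
    return ((width + 15) / 16) * ((height + 15) / 16);
}

/* Level 4.1 allows at most 8,192 macroblocks per frame. */
static int fits_level_4_1(int width, int height)
{
    return macroblocks(width, height) <= 8192;
}
```

This is why 1920×1080 is fine at Level 4.1 while anything wider, such as a 2048-pixel DCI frame, already busts the per-frame budget.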
    2 points
  7. Okay, people ... show 'em if you got 'em! Meet the love of my life: Her name is Scimitar. Shoutout to the good people at Overkill Computers for building her for me!
    2 points
  8. Just for you guys, I reconfigured my PC to turn on the RGB. It's placed under a desk in the 'office' (at home), next to where my wife now also sits while she finishes her PhD, so the whole thing is pretty much hidden. I use 4 monitors with a KVM so that we can switch between the machines; otherwise we have 2 screens each when we are both working.

Specs:
Intel Core i9-10900K
ASUS ROG STRIX Z490-F GAMING
Nvidia RTX 3090 TUF Gaming OC
Samsung 970 Evo Plus NVMe PCIe M.2 1TB
Kingston SKC2500M81000G 1TB
Seagate FireCuda SSHD 2TB (2016)
Seagate Barracuda 2TB (2018)
Kingston HyperX DDR4 3200 C18 4x16GB
    2 points
  9. There are some pretty badass resources out there for Shodan. A good place to start to really see some of the crazy shit you can do with it, as well as how to avoid a visit from the Department of Homeland Security, can be located here: This is a badass talk, and Dan is a kick-ass DEF CON speaker. Also, this quick guide will introduce you to Shodan: https://www.hackeracademy.org/hacking-with-shodan-how-to-use-shodan-guide/ Here are some cool pentesting-related projects that use Shodan: https://awesomeopensource.com/projects/shodan
    2 points
  10. Just wanted to gather opinions from others and also put out some of my thoughts. It seems like the big contenders in this field are Rust, Zig, and D; Nim and V are also targeting C's systems programming space. Of all of these languages, I personally like the syntax of D and its metaprogramming concepts. It pretty much looks like what C++ should have been. I also like how memory safety in the compiler is not the default and instead has to be specified when to use it and when not to, which might help cut down on compile times. A function that does a simple calculation, like computing an interest rate, might only use stack variables and nothing allocated on the heap, so it doesn't really need the memory safety features wasting time on it; adding a node to a list, on the other hand, might. I have dabbled some in Rust. Honestly, I don't like it. The syntax just seems a little overly complicated, and I feel like a lot of words in the ecosystem are not in fact new concepts, but renamings of concepts already present in computer science. One thing I do like about Rust: the compiler is verbose, which always helps with troubleshooting/debugging. I also like that it catches branches of execution that are not being handled, such as in exception handling. Zig has gotten some buzz in the BSD community, but I see little mention of it elsewhere. However, it is not at a 1.0 release yet, so that could be a reason why. Overall, I don't think these languages will fully replace C. It is just so easy to port and get stuff bootstrapped. Not to mention the time and resources needed to re-implement something like the Linux kernel 100% in Rust or another language would be enormous. I see the C language as timeless and always having a use case. Maybe its use will lessen some with the likes of Rust, D, and Zig coming up, but we probably won't see a day in my lifetime when C code isn't at play somewhere.
    2 points
  11. We covered some of this in my Secure Software Engineering class. Lots of great info and lots of great tools out there. NIST is pretty awesome. SEI is also pretty amazing for looking up things dealing with code. For those unfamiliar, SEI has documentation for each language on common insecure code snippets, why they are insecure, and better ways to write the code while achieving the same result. SEI for C, as an example: https://wiki.sei.cmu.edu/confluence/display/c
    2 points
  12. HCL AppScan CodeSweep will try to detect vulnerabilities within your code each time you save. It comes as a VSCode extension or as a GitHub Action, so it can scan code upon a pull request. It supports scanning files of the following types: Android-Java, Angular, Apex, ASP.Net, C, C#, Cobol, ColdFusion, Golang, Groovy, Infrastructure as Code, Ionic, JavaScript, jQuery, Kotlin, MooTools, NodeJS, Objective-C, Perl, PHP, PL/SQL, Python, React, React Native, Ruby, Scala, Swift, T-SQL, TypeScript, VB.Net, VueJS, and Xamarin. VSCode Extension: https://marketplace.visualstudio.com/items?itemName=HCLTechnologies.hclappscancodesweep GitHub Action: https://github.com/marketplace/actions/hcl-appscan-codesweep
    2 points
  13. Introduction

Hi all! I wanted to take some time to put together a comprehensive privacy guide with the goal of offering viable, privacy-oriented alternatives to common services and software. When determining my recommendations and suggestions, I am mostly utilizing the following criteria:

Follows the GNU four freedoms
Services not based in mandatory key disclosure jurisdictions
Audited or transparent

Motivation

"That's great Wade, but I don't have anything to hide." This is a fallacy I would like to disrupt. Voluntarily giving information away is perfectly reasonable, so long as one understands the costs, benefits, and risks. There are security considerations that many people fail to realize when they suggest that privacy is not important. Humans can be the greatest vulnerability and the easiest way to gain unauthorized access to a system; simply knowing information, especially information that people voluntarily provide or make publicly available, can be valuable in the information-gathering phases of an attack. An attacker can use this information to social engineer you or people related to you, causing potential financial damage to you or those around you. Some in the intelligence community suggest that reducing privacy is a necessary cost of increasing security. I look at this differently: strong privacy goes hand-in-hand with security. I will attempt to demonstrate this in a related thread, Twenty+ Reasons Why Mass Surveillance is Dangerous. In the meantime, you are welcome to view my original publication on Packet Storm Security titled, Twenty Reasons Why Mass Surveillance is Dangerous. Additional resources I'd like to recommend on why privacy is important, to support my motivation:

The Value of Privacy by Bruce Schneier
When Did You First Realize the Importance of Privacy?
by EFF
The Little Book of Privacy by Mozilla

Table of Contents

- Providers
    - Cloud Hosting
    - DNS
        - Resolvers
        - Clients
    - Email
        - Hosts
        - Clients
    - Image Hosting
    - News Aggregation
    - Search Engines
    - Social Networks
    - Text Hosting (Pastebin)
    - Video Hosting
    - Web Hosting
- Software
    - Calendars and Contacts
    - Chat
    - Document and Note Taking
    - Encryption
    - File Sharing
    - Metadata Removal
    - Password Managers
    - Web Browsers
        - Browser Specific Tweaks
        - Browser Specific Extensions
- Operating Systems and Firmware
    - Desktop
    - Mobile
    - Routers

I will update this thread and table of contents as the subsidiary topics are created.
    2 points
  14. A couple of weeks ago, an organization called intigriti ran a hacking challenge in which people had to exploit an XSS vulnerability in this page: https://challenge.intigriti.io/ Unfortunately the competition is over and it has been solved in numerous different ways, but they left the page up, so you can still go test your skills! In case they ever take that down, you can still access the code for the challenge, as well as multiple solutions and explanations, here: https://blog.intigriti.com/2019/05/06/intigriti-xss-challenge-1/
    2 points
  15. In my recent class, we did a series of languages from different paradigms to get an understanding of how they are used, their pros/cons, etc. Here is some code I wanted to share from a portion of my homework for anyone who hasn't seen LISP. All in all, it is a pretty fun language to tinker with, and I may end up doing some more of it on my own down the road.

; Adds two numbers and returns the sum.
(defun add (x y)
  (+ x y))

; Returns the minimum number from a list.
(defun minimum (L)
  (apply 'min L))

; Function that returns the average of a list of numbers.
(defun average (number-list)
  (let ((total 0))
    (dolist (i number-list)
      (setf total (+ total i)))
    (/ total (length number-list))))

; Function that returns how many times an element occurs in a list.
(defun count-of (x elements)
  (let ((n 0))
    (dolist (i elements)
      (if (equal i x) (setf n (+ n 1))))
    n))

; Returns the factorial of a number using an iterative method.
(defun iterative-factorial (num)
  (let ((factorial 1))
    (dotimes (run num factorial)
      (setf factorial (* factorial (+ run 1))))))

; Using a recursive method, this function returns the factorial of a number.
(defun recursive-factorial (n)
  (if (<= n 0)
      1
      (* n (recursive-factorial (- n 1)))))

; This function calculates a number from the Fibonacci sequence and returns it.
(defun fibonacci (num)
  (if (or (zerop num) (= num 1))
      1
      (let ((F1 (fibonacci (- num 1)))
            (F2 (fibonacci (- num 2))))
        (+ F1 F2))))

; Takes a list and returns all elements that occur on and after a symbol.
(defun trim-to (sym elements)
  (member sym elements))

; Returns the Ackermann function of two numbers.
(defun ackermann (num1 num2)
  (cond ((zerop num1) (1+ num2))
        ((zerop num2) (ackermann (1- num1) 1))
        (t (ackermann (1- num1) (ackermann num1 (1- num2))))))

; This function defines test code for each previous function.
(defun test ()
  (print (add 3 1))
  (print (average '(1 2 3 4 5 6 7 8 9)))
  (print (minimum '(5 78 9 8 3)))
  (print (count-of 'a '(a '(a c) d c a)))
  (print (iterative-factorial 5))
  (print (iterative-factorial 4))
  (print (fibonacci 6))
  (print (trim-to 'c '(a b c d e)))
  (print (ackermann 1 1)))

; Calls the test function.
(test)
    2 points
  16. In my DnD group we've always tracked initiative on a whiteboard, and it's always been a pain in the ass. We'd write down the names of everyone in the encounter, take note of their initiative scores, rewrite the whole list in order, and then do all damage calculation by hand. It took way too long and was always very anticlimactic. We'd be rushing through a cave to some epic music and, at the peak of excitement, the DM shouts, "You're greeted by 5 vicious ancient dragons!!" and then we'd have to pause for 5-10 minutes while we fumbled around with our whiteboard. Even then, the encounter itself would be a bit clumsy as we haphazardly tried to figure out damage and whose turn it was. No more! Now there's a tool which will do all of that for you! (Though soon after finishing this program I found out there are dozens of free mobile apps that do the same thing...) This tool is object oriented, and it keeps track of Mob objects in a linked list. Here is a screenshot: Clicking the bottom 3 buttons creates popup dialogs that you can use to enter the information. Here is the code: Main.java GUI.java Mob.java There are some small limitations: There is no healing button. What you can do instead is just enter a negative number for damage. I could have easily added a healing button with only a few lines of code, but I felt that it would clutter the UI a bit for something that is virtually identical to the damage button. The program does not distinguish between NPCs and players. The only downside of this is that if a player "dies", it doesn't prompt them to do their death saves. Hopefully your DM pays enough attention to notice when the player is skipped in the order and asks them to do it themselves.
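The original tracker is Java, but the core trick, keeping the list sorted by initiative so turn order falls out for free, can be sketched in a few lines of C. The struct fields and names below are my own guesses, not the actual Mob class:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct mob {
    char name[32];
    int initiative;
    int hp;
    struct mob *next;
} mob_t;

/* Insert a combatant so the list stays sorted by descending initiative;
 * walking the list head-to-tail then IS the turn order. */
static mob_t *insert_mob(mob_t *head, const char *name, int initiative, int hp)
{
    mob_t *m = malloc(sizeof *m);
    strncpy(m->name, name, sizeof m->name - 1);
    m->name[sizeof m->name - 1] = '\0';
    m->initiative = initiative;
    m->hp = hp;

    /* New highest initiative becomes the new head. */
    if (head == NULL || initiative > head->initiative) {
        m->next = head;
        return m;
    }
    /* Otherwise walk to the insertion point. */
    mob_t *cur = head;
    while (cur->next != NULL && cur->next->initiative >= initiative)
        cur = cur->next;
    m->next = cur->next;
    cur->next = m;
    return head;
}
```

Damage is then just `m->hp -= dmg` on the node in question, with a negative number doubling as healing, same as the post's workaround.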
    2 points
  17. Here is a bit of an incomplete program I started. Well, the code I post works, but I had planned to extend this out. This calculator has a GUI and takes into account order of operations. The only issue I've had with it is that the output can be a little wonky when answers are negative (such as 1 - 9 * 9). At some point when I have time to work on it again, my original plan was to build in the functionality to input an equation and allow the user to specify a range of values that X can hold; it would compute the equation and output all of the results. And of course to add in more operations such as trig functions, etc. Essentially my end goal at some point is a calculator that could take the place of my graphing calculator. Main.java Calculator.java ParseCalculation.java
    2 points
  18. I am currently about to finish a class in my course work that deals with digital logic at a very basic level. So, I would like to share a little bit of what I have learned. Data Representation in a Computer Communicating digitally can be traced back to the days of Samuel Morse and the invention of the telegraph. Communicating over long distances via wire required some sort of standardized system of communication. Samuel Morse developed the famous system that we know as Morse code to facilitate communication. On paper, the language is represented by a series of dots and dashes; coming from a speaker, it is represented by long (DAH) and short (DIT) beeps. By standard convention, a "dah" has a width of 3 "dits." (A) .- [DIT - DAH] (B) -... [DAH - DIT - DIT - DIT] While used in the telegraph, Morse code was not implemented in computers, but it remains a real-world, pre-computer example of how information could be stored. Morse code travels across a wire by turning current on and off, generally done by a telegraph operator tapping a metal paddle onto a metal surface. Essentially, it is just a switch. Morse code didn't become the standard of data representation in computers; binary logic was chosen instead, representing information in 1s and 0s instead of long and short audio beeps. A "HIGH" voltage is generally considered to be represented by "1," also known as "ON/TRUE." A "LOW" voltage is represented by a "0," or "OFF/FALSE." Now, binary is more than just a convention; it is an actual way of doing mathematics. We conventionally use a "base ten" system of counting, also known as the "decimal system" (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10). Binary is a "base two" system where there are only two unique characters that make up the whole number system, which can be repeated to make more complex numbers.
Look at the list below: 0 (zero) 1 (one) 10 (two) 11 (three) 100 (four) Each individual symbol is dubbed a "bit" and can only represent 2 possible values. So "10" is 2 bits in width. To see how many possible values a certain number of bits can represent, calculate 2 to the power of the number of bits. 2^2 = 4 00 01 10 11 2^3 = 8 000 (zero) 001 (one) 010 (two) 011 (three) 100 (four) 101 (five) 110 (six) 111 (seven) The bottom line is that information in computers can be represented as a switch or series of switches. Imagine we have a battery connected to 8 light bulbs with a switch between the battery and each light bulb. So, we have 8 light bulbs and 8 light switches. Using binary, we can represent 2^8 numbers starting from zero by switching on lights. Light bulbs that are lit represent a 1; light bulbs that are not lit represent a 0. Computers at the very basic level are a system of switches that perform operations on switches to change the system's state. Boolean Algebra and Truth Tables George Boole was a man who had a goal of relating human decision making to mathematical logic. He wanted to develop a mathematical way of expressing logic. Thus, he developed what we call Boolean Algebra. This form of algebra uses typical math symbols that we are all used to seeing, however they have a different meaning. In this form of math, the state of a machine, or a decision being made, is equal to an equation of variables that take on certain behavior based on the state of inputs and their relation to one another. A good way to explain this is to take a look at an example and break it down. F = ab + c'b F is the output of the equation. On the right side of the equals sign, we have three variables (a, b, c). "ab" is an expression that looks like multiplication, which in Boolean algebra is representative of "and." The addition symbol is representative of an "or." An apostrophe means an inversion.
Any value that is not inverted is assumed to represent true. An inversion of a value means false. This system will also assume that F is true. This is how we can read this: F is true if a and b are true, or c is not true and b is true. A quick table for reference: * AND + OR ' NOT Now, another and easier way we can represent data is a truth table. In the following link is a file named "truthtables.pdf" with three sample truth tables. Each column represents an input or output. The top row of each table is a label; underneath is the state of that input or output. Schematics of a Basic Digital Circuit Now that some base information has been established, the basic circuits that a system can use to execute logic can be discussed. For creating digital logic circuits in class, I used Logisim, which is what I will be using to create examples. There are three main basic components in a digital circuit, and they are made up of transistors. Their construction from transistors is outside the scope of this post. These three main components are the ones discussed in the previous section: AND, OR, and NOT. Here are two images: the first is how the gates are represented; the second shows the truth tables that explain the logical operation being performed by each type of gate. Using our Boolean equation and a truth table is a quick way to prototype a digital circuit. Here is a drawing in Logisim of the circuit that would produce the same results as the equation "F = ab + c'b." Recreating this circuit in Logisim or a similar program, we can see that the behavior of this circuit matches what our truth table says. To do this for any Boolean expression, simply take the inputs and connect them with their appropriate gates. In the case of "ab," the inputs named a and b are connected to the two input pins of the AND gate. The output is fed to an OR gate that is represented by the addition symbol.
Input c is inverted by a NOT gate; its output is fed into one input of a second AND gate, whose other input comes from b. The output of this second AND gate is also fed into the OR gate. If either AND gate outputs true, then the output of the circuit (F) will be true.
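A quick way to double-check an equation like "F = ab + c'b" against a hand-drawn truth table is to enumerate it in code. A small Python sketch, treating * as AND, + as OR, and ' as NOT:

```python
from itertools import product

# Evaluate F = ab + c'b, where juxtaposition is AND, + is OR, ' is NOT.
def F(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or ((not c) and b)

# Print the full truth table for all eight input combinations.
print("a b c | F")
for a, b, c in product([False, True], repeat=3):
    print(f"{a:d} {b:d} {c:d} | {F(a, b, c):d}")
```

Each printed row should match the corresponding row of the truth table drawn for the Logisim circuit.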
    2 points
  19. ether-vote A decentralized voting application using the Ethereum blockchain architecture. Features Initialize a collection of candidates who will be applying for a position Votes are stored on the blockchain No central authority needs to be trusted Goals This current version is a proof of concept. Voting systems can serve as a building block for many complex decentralized applications. In the future, the following goals will be completed: Rebuild the app using the Truffle framework Provide clear instructions for deploying the dapp to a testnet Add the ability to interact with the smart contract from the command line Implement a voting token (with a limited supply) into the smart contract Implement a payment system into the dapp that would allow users to buy/sell voting tokens Code EtherVote.sol

pragma solidity ^0.4.11;

contract EtherVote {
    mapping (bytes32 => uint8) public numberOfVotesReceived;
    bytes32[] public listOfCandidates;

    function EtherVote(bytes32[] candidates) {
        listOfCandidates = candidates;
    }

    function isValidCandidate(bytes32 candidate) returns (bool) {
        for(uint index = 0; index < listOfCandidates.length; index++) {
            if(listOfCandidates[index] == candidate) {
                return true;
            }
        }
        return false;
    }

    function getTotalVotesForCandidate(bytes32 candidate) returns (uint8) {
        require(isValidCandidate(candidate));
        return numberOfVotesReceived[candidate];
    }

    function setVoteForCandidate(bytes32 candidate) {
        require(isValidCandidate(candidate));
        numberOfVotesReceived[candidate] += 1;
    }
}

.bowerrc

{ "directory": "web/vendor/" }

bower.json

{
    "name": "ether-vote",
    "appPath": "web",
    "version": "0.0.1",
    "dependencies": {
        "lodash": "~4.17.4",
        "bootstrap": "v4.0.0-alpha.6",
        "less": "~2.7.2"
    }
}

package.json

{
    "name": "ether-vote",
    "version": "0.0.1",
    "devDependencies": {
        "ethereumjs-testrpc": "^4.1.1",
        "web3": "^0.20.1",
        "solc": "^0.4.16"
    }
}

Usage (Node) To retrieve the number of votes for a given candidate: contractInstance.getTotalVotesForCandidate.call('Holo'); To
cast a vote for a particular candidate: contractInstance.setVoteForCandidate('Kurisu', {from: web3.eth.accounts[1]}); Installation ether-vote requires Node.js and bower to run. Step 1 - Install the frontend dependencies: bower install Step 2 - Install the node modules: npm install Step 3 - Run testrpc node_modules/.bin/testrpc This will generate 10 keypairs (public addresses / private keys) that each have 100 Ether for testing purposes. For example: EthereumJS TestRPC v4.1.1 (ganache-core: 1.1.2) Available Accounts ================== (0) 0x3853246f7dd692044b01786ea42a88197f6dfef9 (1) 0x1067092bee809c703ed33c11cc2ca3f3d3e33f1f (2) 0x4b9ad5d76fc3abe51d02fa9c631fe2e6dd21261a (3) 0xbe5dacc37242be5ca41baa25a88657e73fbae2c1 (4) 0x8afc23d930072c286c31a22d6ec5cb9330acd51e (5) 0x21deb9442d2ac8aefdeaf4521e568a98de3ebb6f (6) 0x39c9c3fffaff694388354aa40d22236ff102cb01 (7) 0x6927e56ae99f8a9531eaa5769486f0d9c67f1d07 (8) 0x65ad95852c58d7a9ab6177a55aa50f4c98507a83 (9) 0xb963574b692ace8f3f392531ba46788258d19eb6 Private Keys ================== (0) fb1e07512bfa729237496733dce0ba217356aaa5c14aecf3cecc317042bc77cc (1) 1b504d05041f1513c14dda6cfcced3b28ae5a47e33a75ce84a5d724adef69f6a (2) e5756fb44810101d141443a4f20d21dbb7ddfb79157a447721a3fc8a118934bc (3) bf811c983a80f53ec805bb956720946672a45e6739fe9d34f8099855f3658f17 (4) 681a0a2d42087966db7ca600f92c9b375f87b2e6dfae53e9358dbf54f3e26fc8 (5) b104ed383582580eae090a6d883307245d67d338db9e988312c28a30c61b543a (6) f74738475aef7b0340f902ea85c0900831b1e1b337bc0f0891e56540eed26491 (7) 96dfa361e52f3f45b24a058846ea6df844f8a89842ef83855309bb0c7827913f (8) 9cb8adf3b2e5026582b20f0c65aae2c2c4f6adb3e406cd3a52df93050a5b12fe (9) 4b152799a199aa7200432698d14aa80f970232ee0c97809e45b87880814dad65 HD Wallet ================== Mnemonic: drama aspect juice culture foot federal frequent pizza hawk giggle tenant happy Base HD Path: m/44'/60'/0'/0/{account_index} Listening on localhost:8545 Step 4.0 - Run node Step 4.1 - Include web3.js Web3 = require('web3'); web3 
= new Web3(new Web3.providers.HttpProvider("http://127.0.0.1:8545")); Step 4.2 - Set the output of EtherVote.sol to a variable smartContract = fs.readFileSync('EtherVote.sol').toString(); Step 4.3 - Compile the contract using solc solc = require('solc'); compiledCode = solc.compile(smartContract); The output will return a JSON object that contains important information like the Ethereum Contract Application Binary Interface (ABI) and smart contract bytecode. For example: { contracts: { ':EtherVote': { assembly: [ Object ], bytecode: '6060604052341561000f57600080fd5b6040516103dc3803806103dc833981016040528080518201919050505b806001908051906020019061004292919061004a565b505b506100c2565b82805482825590600052602060002090810192821561008c579160200282015b8281111561008b57825182906000191690559160200191906001019061006a565b5b509050610099919061009d565b5090565b6100bf91905b808211156100bb57600081600............continued............', functionHashes: [ Object ], gasEstimates: [ Object ], interface: '[{"constant":true,"inputs":[{"name":"","type":"bytes32"}],"name":"numberOfVotesReceived","outputs":[{"name":"","type":"uint8"}],"payable":false,"stateMutability":"view","type":"function"},............continued............]', metadata: '{"compiler":{"version":"0.4.16+commit.d7661dd9"},"language":"Solidity","output":{"abi":[{"constant":true,"inputs":[{"name":"","type":"bytes32"}],"name":"numberOfVotesReceived","outputs":[{"name":"","type":"uint8"}],............continued............}]}', opcodes: 'PUSH1 0x60 PUSH1 0x40 MSTORE CALLVALUE ISZERO PUSH2 0xF JUMPI PUSH1 0x0 DUP1 REVERT JUMPDEST PUSH1 0x40 MLOAD PUSH2 0x3DC CODESIZE SUB DUP1 PUSH2 0x3DC DUP4 CODECOPY DUP2 ADD PUSH1 0x40 MSTORE DUP1 DUP1 MLOAD DUP3 ADD SWAP2 SWAP1 POP POP JUMPDEST DUP1 PUSH1 0x1 SWAP1 DUP1 MLOAD SWAP1 PUSH1 0x20 ADD SWAP1 PUSH2 0x42 SWAP3 SWAP2 SWAP1 PUSH2 0x4A JUMP JUMPDEST POP JUMPDEST POP PUSH2 0xC2 JUMP JUMPDEST DUP3 DUP1 SLOAD DUP3 DUP3 SSTORE SWAP1 PUSH1 0x0 MSTORE PUSH1 0x20 ............continued............ 
', runtimeBytecode: '60606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680630d8de22c1461006a5780633898ac29146100ab5780638c1d9f30146100ec57806392d7df4a1461012b578063dcebb25e1461016a575b600080fd5b34156100............continued............', srcmap: '2', srcmapRuntime: '', sourceList: [ '' ], sources: { '': { AST: [ Object ] } } } } Step 5.0 - Create an ABI definition object by passing in the ABI definition as JSON from the compiledCode object that was created in Step 4.3. Then, pass this ABI definition object to the web3.eth.contract function in order to create an EtherVote object. abiDefinition = JSON.parse(compiledCode.contracts[':EtherVote'].interface); EtherVoteContract = web3.eth.contract(abiDefinition); Step 5.1 - Save the byteCode object from the compiledCode object to a variable, as we will use this when calling our EtherVoteContract's prototypical .new() function byteCode = compiledCode.contracts[':EtherVote'].bytecode; Step 5.2 - Deploy the smart contract to the Ethereum blockchain by invoking EtherVoteContract.new(...), which takes in two parameters: The first parameter is the values for the constructor - in this case, our list of candidates to vote for The second parameter is an object that contains the following properties: Property Description data The compiled bytecode that will be deployed to the Ethereum blockchain from The address that will deploy the smart contract gas The amount of money that will be offered to miners in order to include the code on the blockchain deployedContract = EtherVoteContract.new(['Kurisu', 'Holo', 'Rin', 'Haruhi', 'Mitsuha'], { data: byteCode, from: web3.eth.accounts[0], gas: 4700000 } ); Step 5.3 - Create an instance of the smart contract by invoking the at function on the EtherVoteContract object, passing in the address property from the deployedContract object that was created in Step 5.2 contractInstance = EtherVoteContract.at(deployedContract.address); Congratulations, you are
now ready to interact with the dapp! (See: Usage above)
    2 points
  20. This is a program that I wrote a few years ago in order to test a theory that I read online. I read on some website that you could calculate the value of pi by throwing hot dogs on the floor, which absolutely blew me away. I couldn't believe it, so I decided to test it. I wrote a program to simulate throwing 1 billion hot dogs on the floor and by golly let me tell you, they're right. Here's how: (Technically this works with any stick-like object.) Let x be the length of our object (hot dog in our case). You must then draw lines on the floor perpendicular to the direction you're facing which are all x length apart. This elegantly drawn image demonstrates what I mean flawlessly: The total number of hot dogs thrown divided by the number which landed on a line is an approximation for pi. Like I said, I simply refused to believe that something so simple could be possible so I wrote a program to simulate the process:

#!/usr/bin/perl -w
use strict;

my($dist, $lower, $upper, $lenComponent, $approx);
my $len = 6;
my $throws = 1000000; #CHANGE TO WHAT YOU WANT
my $intersects = 0;

for(1..$throws){
    $dist = rand(180); #arbitrary maximum throwing distance
    $lenComponent = sin(rand(6.28318530718))*$len; #trig with up to 2pi radians rotation
    $lower = $dist - ($lenComponent/2);
    $upper = $lower + $lenComponent;
    for(my $line = 0; $line<=($dist+$len); $line+=$len){
        if($line>=$lower and $line<=$upper){
            ++$intersects;
            last;
        }
    }
}
$approx = (1/$intersects)*$throws;
print "Pi is approximately: $approx";

And I ran the program overnight with 1 BILLION hot dogs, which yielded this result: 3.14154932843791 VS 3.14159265358979 Error: 0.00004332515 Wowza! I also wrote a second version of the program which uses multi-threading to throw the hot dogs faster. It was actually a neat exercise because I wrote it such that all of the threads can edit the same variable which counts the total number of intersections.
Code:

#!/usr/bin/perl -w
use strict;
use threads;
use threads::shared;

my $intersects :shared = 0;
my $throws = 10000000;
my @threads = ();

sub hotdog{
    my($dist, $lenComponent, $lower, $upper);
    my $len = 1;
    for(1..$throws){
        $dist = rand(5); #arbitrary maximum throwing distance
        $lenComponent = sin(rand(6.28318530718))*$len; #trig with up to 2pi radians rotation
        $lower = $dist - ($lenComponent/2);
        $upper = $dist + ($lenComponent/2);
        for(my $line = 0; $line<=($dist+$len); $line+=$len){
            if($line>=$lower and $line<=$upper){
                lock($intersects);
                ++$intersects;
                last;
            }
        }
    }
}

for(1..10){
    push (@threads, threads->create(\&hotdog));
}
$_->join foreach @threads;
print "Pi is approximately: ".(($throws*scalar(@threads))/$intersects);
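For anyone who wants to replay the experiment without Perl, here is a minimal Python sketch of the classic Buffon's-needle estimator. Note it is not a port of the code above: it samples the distance from the needle's center to the nearest line directly and uses the standard crossing probability 2L/(πt), so with line spacing equal to the needle length, pi ≈ 2 * throws / crossings.

```python
import math
import random

def estimate_pi(throws: int, length: float = 1.0, spacing: float = 1.0) -> float:
    """Drop `throws` needles of `length` onto a floor ruled with parallel
    lines `spacing` apart and count how many cross a line."""
    crossings = 0
    for _ in range(throws):
        d = random.uniform(0, spacing / 2)       # center to nearest line
        theta = random.uniform(0, math.pi)       # needle orientation
        if d <= (length / 2) * math.sin(theta):  # the needle crosses the line
            crossings += 1
    # P(cross) = 2*length / (pi*spacing)  =>  pi ~ 2*length*throws / (spacing*crossings)
    return 2 * length * throws / (spacing * crossings)

print(estimate_pi(1_000_000))
```

With a million throws the estimate typically lands within a few thousandths of pi; the error shrinks roughly with the square root of the number of throws.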
    2 points
  21. IMO this is only a mind map of passive reconnaissance resources, and that's only half of the first step of Lockheed Martin's Cyber Kill Chain. Not to diminish the usefulness of the link at all. As far as passive reconnaissance goes, the resources mentioned look quite comprehensive. You could, for example, prepare an initial dossier report to hand off to an active recon team so they could map the company profile to a network topology. This step is indispensable for a large APT, but to say that this covers all the steps is an exaggeration. Because it throws in links to malware analysis resources and exploit archives, one could get confused and think the link appropriately covers every stage, but that is by no means its strong point. Again, I don't intend to diminish the usefulness of this link at all! Passive information gathering is the most important step of a large-scale APT, yet it's the most glossed-over subject in every security course! If you check out Sparc Flow's "How to Hack Like a God" and some of his other books, he actually gives some emphasis to casing your target. AND NO WONDER! His books are actually just case studies!
    2 points
  22. Download all of the released NSA documents (continuously updating) with two scripts. Very hacky, but gets the job done. DEPENDS ON LYNX. (Why? Because I'm lazy)

$ apt install lynx

nsadl.sh

#!/bin/bash
echo 'Scraping links from Primary Sources...'
lynx -dump "https://www.eff.org/nsa-spying/nsadocs" | grep "https://www.eff.org/document" | awk '/http/{print $2}' > links.txt
echo 'Done. Links saved as "links.txt"'
echo 'Downloading .pdf documents using "links.txt" -- this may take awhile...'
while read line
do
    name=$line
    sh scraper.sh $name
done < links.txt
echo 'All done!'

scraper.sh

#!/bin/bash
STR="`wget --quiet -O - $1 | grep -Eo 'https://www.eff.org/files/[0-9]+/[^"]+\.pdf';`"
wget --no-clobber --quiet $STR

Usage: $ sh nsadl.sh; echo 'Have fun!'
    2 points
  23. Hey, I've been playing around with Stable Diffusion lately, and I figured I'd just dump a bunch of links for how people can get started and all the different ways to use it. Common fork that lets you use the model in a browser UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases Weights for the 2.1 version: https://huggingface.co/stabilityai/stable-diffusion-2-1 Amazing site with thousands of different fine-tuned models that you can download: https://civitai.com/ This is the GitHub page for Riffusion, a model that creates music by generating audio spectrogram images: https://github.com/riffusion/riffusion-app Riffusion weights: https://huggingface.co/riffusion/riffusion-model-v1
    1 point
  24. Just a heads up for people curious. I just pushed some commits with a new Makefile for compiling on x86_64 using the GCC toolchain. To make it work, you're going to have to install Arm's GCC toolchain. You can get the Linux binaries from Arm. I then moved them on my machine to `/usr/bin`; however, you can change the path to wherever you want them to be.
    1 point
  25. A lot of people use "people" search engines and Google dorks to find people or information about people, but you can actually find out quite a bit of information via public registries. Consider some of these: The Knot - Wedding Registry Search RegistryFinder - Baby Shower and Graduation Search MyRegistry - Wedding, Baby, and Gift List Search Amazon - Registries for Any Occasion Search Bed Bath & Beyond - Gift Registry Search The Bump - Baby Registry Search You can also find out the PII of anyone in the United States who is registered to vote by looking at local election registries. Does anyone know other registries to include?
    1 point
  26. I have often found that nonprofits pop up that display election results. Some of the states have already gotten their info off this one, but it may be worth a try: https://voteref.com/
    1 point
  27. Basic algorithm for RSA key generation 1. Choose 2 large primes and make those p and q 2. Let N = p * q 3. Let T = (p-1)(q-1), the Euler totient 4. Choose 2 numbers, e and d, where (e*d) mod T = 1 5. Let the public key be (e, N) 6. Let the private key be (d, N) A few details about my implementation Primes are generated by a crate called "num-primes". Values e and d are selected by letting e = 0 and d = T, looping until the condition (e*d) mod T = 1 holds. If the condition is not true, add one to e and subtract one from d. Code Cargo.toml

[package]
name = "rust-rsa-fun"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
num-primes = "0.3.0"

main.rs

use num_primes::{Generator, BigUint};

struct PublicKey {
    n: BigUint,
    e: u64,
}

struct PrivateKey {
    n: BigUint,
    d: u64,
}

impl PublicKey {
    pub fn new(n: BigUint, e: u64) -> Self {
        PublicKey { n: n, e: e }
    }

    pub fn print(&self) {
        println!("Public Key");
        println!("\tN: {}", self.n);
        println!("\te: {}", self.e);
    }
}

impl PrivateKey {
    pub fn new(n: BigUint, d: u64) -> Self {
        PrivateKey { n: n, d: d }
    }

    pub fn print(&self) {
        println!("Private Key");
        println!("\tN: {}", self.n);
        println!("\td: {}", self.d);
    }
}

fn calc_e_d(e: u64, d: u64, t: &BigUint) -> u64 {
    let T: u64 = t.bits().try_into().unwrap();
    (e * d) % T
}

fn find_d_e(t: &BigUint) -> Option<(u64, u64)> {
    let mut d: u64 = t.bits().try_into().unwrap();
    let mut e: u64 = 2;
    let one: u64 = 1;
    while calc_e_d(e, d, t) != one {
        if d == 2 {
            return None;
        }
        e = e + one;
        d = d - one;
    }
    Some((e, d))
}

fn main() {
    let p = Generator::new_prime(64);
    let q = Generator::new_prime(64);
    println!("Let p = {p}");
    println!("Let q = {q}");

    let N = &p * &q;
    println!("Let N = {N}");

    let one: u64 = 1;
    let T = (p - one) * (q - one);
    println!("Let T = {T}");

    let keys = if let Some(k) = find_d_e(&T) {
        k
    } else {
        println!("Could not find numbers e and d");
        return;
    };

    let public_key = PublicKey::new(N.clone(), keys.0);
    let private_key = PrivateKey::new(N.clone(), keys.1);
    public_key.print();
    private_key.print();
}
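For comparison, the textbook way to pick the exponents is to fix a small e coprime to T and compute d as the modular inverse of e mod T (via the extended Euclidean algorithm), rather than walking e up and d down; note that the Rust search above also reduces modulo t.bits() rather than T itself, so it won't find a real inverse. A tiny Python sketch with toy primes (illustration only, nowhere near secure):

```python
from math import gcd

# Toy RSA key generation -- tiny fixed primes for illustration, NOT secure.
p, q = 61, 53
n = p * q                 # modulus N = 3233
t = (p - 1) * (q - 1)     # Euler totient T = 3120

e = 17                    # public exponent; must be coprime to T
assert gcd(e, t) == 1
d = pow(e, -1, t)         # modular inverse (Python 3.8+), so (e * d) % t == 1
assert (e * d) % t == 1

# Round-trip check: encrypt then decrypt a sample message m < n.
m = 65
c = pow(m, e, n)          # ciphertext
assert pow(c, d, n) == m  # decryption recovers m
print(f"public key: ({e}, {n}), private key: ({d}, {n})")
```

The modular-inverse step is why this scales: it runs in time logarithmic in T, whereas a linear search over e and d is hopeless once T is a 128-bit number.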
    1 point
  28. Here is NIST's announcement post. They've chosen the first 4 algorithms that will become part of their post-quantum cryptographic standard: algorithms designed to resist attacks by quantum computers. The algorithms, their functions, and the mathematics they're based on are as follows: General Encryption: CRYSTALS-Kyber - Lattice-based Digital Signatures: CRYSTALS-Dilithium - Lattice-based FALCON - Lattice-based SPHINCS+ - Hash-based As you can see, 3 of the 4 chosen are lattice-based while the final one is hash-based. There are 4 more algorithms still under consideration, and none of them are hash- or lattice-based.
    1 point
  29. Examples of IT security frameworks COBIT Control Objectives for Information and Related Technology (COBIT) is a framework developed in the mid-90s by ISACA, an independent organization of IT governance professionals. ISACA currently offers the well-known Certified Information Systems Auditor (CISA) and Certified Information Security Manager (CISM) certifications. This framework started out primarily focused on reducing technical risks in organizations, but has evolved recently with COBIT 5 to also include alignment of IT with business-strategic goals. It is the most commonly used framework to achieve compliance with Sarbanes-Oxley rules. ISO 27000 series The ISO 27000 series was developed by the International Organization for Standardization (ISO). It provides a very broad information security framework that can be applied to all types and sizes of organizations. It can be thought of as the information security equivalent of ISO 9000 quality standards for manufacturing, and even includes a similar certification process. It is broken up into different substandards based on the content. For example, ISO 27000 consists of an overview and vocabulary, while ISO 27001 defines the requirements for the program. ISO 27002, which evolved from the British standard BS 7799, defines the operational steps necessary in an information security program. Many more standards and best practices are documented in the ISO 27000 series. ISO 27799, for example, defines information security in healthcare, which could be useful for those companies requiring HIPAA compliance. New ISO 27000 standards are in the works to offer specific advice on cloud computing, storage security and digital evidence collection. ISO 27000 is broad and can be used for any industry, but the certification lends itself to cloud providers looking to demonstrate an active security program. NIST Special Publication 800-53 The U.S.
National Institute of Standards and Technology (NIST) has been building an extensive collection of information security standards and best practices documentation. The NIST Special Publication 800 series was first published in 1990 and has grown to provide advice on just about every aspect of information security. Although NIST SP 800-53 is not specifically an information security framework, other frameworks have evolved from it. U.S. government agencies utilize NIST SP 800-53 to comply with the Federal Information Processing Standard (FIPS) 200 requirements. Even though it is specific to government agencies, the NIST framework could be applied in any other industry and should not be overlooked by companies looking to build an information security program. NIST Special Publication 800-171 NIST SP 800-171 has gained in popularity in recent years due to the requirements set by the U.S. Department of Defense that mandated contractor compliance with the security framework by December 2017. Cyberattacks are occurring throughout the supply chain, and government contractors will find their systems and intellectual property a frequent target used to gain access into federal information systems. For the first time, manufacturers and their subcontractors now have to implement an IT security framework in order to bid on new business opportunities. NIST SP 800-171 was a good choice for this requirement as the framework applies to smaller organizations as well. It is focused on the protection of Controlled Unclassified Information (CUI) resident in nonfederal systems and organizations, which aligns well with manufacturing or other industries not dealing with information systems or bound by other types of compliance. It may not be a good fit by itself for industries dealing with more sensitive information such as credit cards or Social Security data, but it is freely available and allows for the organization to self-certify using readily available documentation from NIST.
The controls included in the NIST SP 800-171 framework are directly related to NIST SP 800-53, but they are less detailed and more generalized. It is still possible to build a crosswalk between the two standards if an organization has to show compliance with NIST SP 800-53 using NIST SP 800-171 as the base. This allows a level of flexibility for smaller organizations that may grow over time as they need to show compliance with the additional controls included in NIST SP 800-53. NIST Cybersecurity Framework for Improving Critical Infrastructure Cybersecurity The NIST Cybersecurity Framework for Improving Critical Infrastructure Cybersecurity is yet another framework option from NIST. It was recently developed under Executive Order (EO) 13636, "Improving Critical Infrastructure Cybersecurity," which was released in February 2013. This standard is different in that it was specifically developed to address U.S. critical infrastructure, including energy production, water supplies, food supplies, communications, healthcare delivery and transportation. These industries have all found themselves targeted by nation-state actors due to their strategic importance to the U.S. and must maintain a higher level of preparedness. The NIST Cybersecurity Framework differs from the other NIST frameworks in that it focuses on risk analysis and risk management. The security controls included in this framework are based on the defined phases of risk management: identify, protect, detect, respond and recover. These phases include the involvement of management, which is key to the success of any information security program. This structured process allows the NIST Cybersecurity Framework to be useful to a wider set of organizations with varying types of security requirements. CIS Controls (formerly the SANS Top 20) The CIS Controls exist on the opposite end of the spectrum from the NIST Cybersecurity Framework.
This framework is a long listing of technical controls and best practice configurations that can be applied to any environment. It does not address risk analysis or risk management like the NIST Cybersecurity Framework, and is solely focused on hardening technical infrastructure to reduce risk and increase resiliency. The CIS Controls are a welcome addition to the growing list of security frameworks because they provide direct operational advice. Information security frameworks can sometimes get caught up on the risk analysis treadmill without reducing overall organizational risk. The CIS Controls pair well with these existing risk management frameworks to help remediate identified risks. They are also a highly useful resource in IT departments that lack technical information security experience. HITRUST CSF It is well known that the HITECH/HIPAA Security Rule has not been successful in preventing data breaches in healthcare. The original HIPAA compliance requirements were written in 1996 and set to apply to a broad set of technologies and organizations. More than 230 million people in the U.S. have had their data breached by a healthcare organization, according to the Department of Health and Human Services. The overly general requirements included in HIPAA and the lack of operational direction are partly to blame for this situation. HITRUST CSF is attempting to pick up where HIPAA left off and improve security for healthcare providers and technology vendors. It combines requirements from almost every compliance regulation in existence, including the EU's GDPR. It includes both risk analysis and risk management frameworks, along with operational requirements to create a massive homogenous framework that could apply to almost any organization and not just those in healthcare. HITRUST is a massive undertaking for any organization due to the heavy weighting given to documentation and processes.
Many organizations end up scoping smaller areas of focus for HITRUST compliance as a result. The costs of obtaining and maintaining HITRUST certification add to the level of effort required to adopt this framework as well. However, the fact that the certification is audited by a third party adds a level of validity similar to an ISO 27000 certification. Organizations that require this level of validation may be interested in the HITRUST CSF. The beauty of any of these frameworks is that there is overlap between them, so "crosswalks" can be built to show compliance with different regulatory standards. For example, ISO 27002 defines information security policy in section 5; COBIT defines it in the section "Plan and Organize;" Sarbanes-Oxley defines it as "Internal Environment;" HIPAA defines it as "Assigned Security Responsibility;" and PCI DSS defines it as "Maintain an Information Security Policy." By using a common framework like ISO 27000, a company can then use this crosswalk process to show compliance with multiple regulations such as HIPAA, Sarbanes-Oxley, PCI DSS and GLBA, to name a few.

IT security framework advice

The choice of a particular IT security framework can be driven by multiple factors. The type of industry or compliance requirements could be deciding factors. Publicly traded companies will probably want to stick with COBIT in order to more readily comply with Sarbanes-Oxley. The ISO 27000 series is the magnum opus of information security frameworks, with applicability in any industry, although the implementation process is long and involved. It is best used where the company needs to market its information security capabilities through an ISO 27000 certification. NIST SP 800-53 is the standard required by U.S. federal agencies but could also be used by any company to build a technology-specific information security plan.
The HITRUST CSF integrates well with healthcare software or hardware vendors looking to provide validation of the security of their products. Any of them will help a security professional organize and manage an information security program. The only bad choice among these frameworks is not choosing any of them. Source
    1 point
  30. Shodan is an Internet of Things search engine that allows you to search and scan a wide variety of devices using a wide array of filters. Some will limit their information gathering to things that they see on the web. You can go beyond this, and Shodan is a tool to help with that: phones, controllers, refrigerators, etc. Shodan has powerful dashboards, community curated filters, and a powerful API to let you plug right into their platform. Here is a HackerSploit video covering some of the basics of Shodan: And if you want to check out the engine for yourself...well, here you go! Link to website: https://www.shodan.io/
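For programmatic access, Shodan's REST API can be driven from any language. Below is a minimal, hedged Python sketch that only builds the search URL; the endpoint path and parameter names are my reading of Shodan's public docs, SHODAN_API_KEY is a placeholder for your own key, and the actual request is left commented out so nothing gets sent:

```python
# Hedged sketch: build a Shodan search URL using only the standard library.
# The endpoint and parameter names reflect Shodan's documented REST API as I
# understand it; SHODAN_API_KEY is a placeholder.
from urllib.parse import urlencode

API_BASE = "https://api.shodan.io/shodan/host/search"

def build_search_url(api_key, query):
    """Build a search URL for a Shodan filter query, e.g. 'port:502 country:US'."""
    return API_BASE + "?" + urlencode({"key": api_key, "query": query})

url = build_search_url("SHODAN_API_KEY", "webcam port:8080")
print(url)
# To actually run the search:
#   from urllib.request import urlopen
#   import json
#   results = json.load(urlopen(url))
#   print(results["total"])
```

The official `shodan` Python package wraps this same API if you would rather not hand-roll requests.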
    1 point
  31. Speaking of RGB....dude.... Even your RGB has RGB! What kind of stream deck is that? The matrix confetti was nuts. I'm also really digging the amount of effort that you put into your "logo" or your Klu brand, and the amount of customization put into your rig. It looks great, dude. That's absolutely epic!
    1 point
  32. @cwade12c Your 4-screen setup puts mine to shame. Are they all connected to the same graphics card? The specs for my rig: The prices were accurate as of 2019.
    1 point
  33. Shodan is crazy powerful. My advice in using it would be: always think about it, before engaging in your next action.
    1 point
34. I am not familiar with the chipset, but a quick search does show that the wifi adapter supports dual band. If your card can only see 2.4 GHz, you will naturally see fewer networks, since some access points may have (for whatever reason) disabled their 2.4 GHz band. Before running airodump, be sure to verify that you are in monitor mode. There are also some options for checking whether there are any problems with your setup that prevent your card from entering monitor mode. You'll want to use airmon-ng <check|check kill> for this and refer to the documentation for more information.

airmon-ng start <interface>
airmon-ng check kill
airodump-ng <options> <monitor-interface>

Also, what does your output look like when you start in monitor mode? If you are concerned about dual booting because of malware, there are ways you can jail or isolate your environments or malware. Look into sandboxing, for example. If you want to quickly test the hypothesis that it might be a VM configuration issue, do a live boot. Throw Kali on a USB drive, and boot directly from the USB drive. You'll be prompted to do a Live Boot or install Kali. Do a Live Boot. It won't impact your main operating system.
    1 point
35. College today is largely a scam (speaking as someone with a Bachelor's, a Master's, and seriously contemplating starting a PhD). Unless you're going into STEM or a field that by law requires a degree, I actually don't see the point vis-à-vis the debt incurred. The democratization of access to online education has changed the game. I also subscribed to Brilliant Premium for a year. It's novel, good, but not worth a resub imo.
    1 point
  36. C++ code here: https://github.com/haxme/canbus_cplus_plus C code here: https://github.com/haxme/canbus_c.git
    1 point
37. So a few months ago, I heard a podcast where a person was talking about how helpful it is to do something as simple as implement an HTTP server in C. So, I decided to embark on this quest when school got a little quieter for the summer (I am just taking a light load). I ended up doing this for a few reasons. First, as a learning experience. What better way to get to know HTTP and how websites work than implementing a web server? Next was dogfooding my own code: making something I can use. I get the experience of writing it, but also the experience of using my own code. And of course, implementing my own features natively in C. Lastly, I figured it would make an interesting resume project. Why C? Well, it's a little closer to the metal and requires the user to get more intimate with the inner workings. Here is the code on my github. Note, as of this posting, I still have some cleaning up to do. But it passes all memory checks on valgrind and seems to be running fine. It is single-threaded and does not yet use non-blocking IO. https://github.com/martintc/HttpServer Website I currently have it deployed to for testing: http://martintc.tech Things I plan to do and improve on (and lessons learned): Implementing my own garbage collector to simplify the memory model. Handling PUT request methods. Implementing SSL/TLS. Implementing multi-threading and non-blocking IO.
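The post's server is written in C (see the repo), but the core loop it describes translates to a few lines in any language. Here is a minimal Python sketch of the same idea (accept a connection, read the request, write back a status line, headers, and body); it is an illustration of the concept, not the author's implementation:

```python
# Minimal single-connection HTTP sketch (illustration only, not the author's
# C server): accept one client, read the request, answer with a fixed page.
import http.client
import socket
import threading

def serve_once(server_sock):
    conn, _addr = server_sock.accept()
    request = conn.recv(4096).decode("latin-1")  # e.g. starts "GET / HTTP/1.1"
    body = "<h1>hello from a toy server</h1>"
    response = (
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n" + body
    )
    conn.sendall(response.encode("latin-1"))
    conn.close()

# Bind to an ephemeral port so the sketch is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# Be our own client for one round trip.
client = http.client.HTTPConnection("127.0.0.1", port)
client.request("GET", "/")
resp = client.getresponse()
status, payload = resp.status, resp.read().decode()
t.join()
server.close()
print(status, payload)
```

Everything the C version has to do by hand (parsing, buffering, connection teardown, non-blocking IO) is exactly what makes the C exercise valuable.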
    1 point
38. I accidentally discovered this today with @killab and thought it was pretty neat. E2E-encrypted file uploads that support streaming encryption/decryption. Potentially useful for quickly sharing disposable files. You can read more about the security here: https://wormhole.app/security URL: https://wormhole.app/
    1 point
39. @cwade12c I also have been a long time user. Actually, you guys got me into arch and such when I was in highschool all those years ago. But yeah, recently I have gotten on the BSD train. What do I like about it? It adheres more strictly to the UNIX philosophy and is a direct descendant of UNIX, unlike Linux, which was a clean-room re-implementation influenced by minix. The system designs are simpler. All of the BSDs have great documentation compared to 99% of linux distributions. The BSDs are each their own independent OS. Binary compatibility is not shared across them, and each has taken its own route (for instance, DragonFly BSD is transitioning to a microkernel). The major selling point for me is that they each own their stack from bootloader to kernel to userland, which makes for a really solid, cohesive system. In contrast, the linux kernel and the GNU userland tools are independent projects that just happen to benefit from working together. The communities are also close-knit compared to Linux, and less toxic. Within a couple of weeks of running NetBSD, I was working with the NetBSD audio dev to test a patch of theirs for a machine of mine. I do believe their developers (BSD in general) are much more involved in their communities. The ports systems are great. My favorite by far is pkgsrc (a NetBSD project) since it can run across multiple operating systems. I actually use pkgsrc and pkgin on Mac OS Big Sur as a replacement for brew and macports. The not-so-great part is hardware support. NetBSD is nice because all drivers are shipped with the generic image, and NetBSD does not take a performance hit when unnecessary drivers are loaded into the kernel (whereas FreeBSD and Linux do, according to a few NetBSD devs I talked to). So NetBSD is the easiest, hardware-wise, for finding out whether your system is supported. The main issues would be wifi and GPUs.
FreeBSD for sure does not have any 802.11ac support (Adrian Chadd is working on it slowly); NetBSD and OpenBSD have a few devices that are supported for 802.11ac. Otherwise, have a card with b/g/n that is supported, or have a supported wifi dongle ready. The patch I tested for the NetBSD audio dev was for my Macbook Pro. I've got a 2015 that is dual booted with Big Sur (mostly for school stuff) and NetBSD. I don't fully understand the audio workings, but NetBSD was defaulting to channel 4 when it should have defaulted to channel 2. So the dev made a patch to check that channel 2 was the default and, if not, to make it so. My experience with OpenBSD has been on an old iBook G4 that I acquired last year. OpenBSD has great legacy powerpc support. It is a great system, and Theo has done a great job with it since he forked it from NetBSD in the mid-90s. CWM is interesting and praised by users; if you're not familiar, CWM is a window manager made by the OpenBSD community. And of course, OpenBSD has a legacy of great contributions like creating OpenSSH, doas, LibreSSL, etc. I have also used it as a webserver. Their in-house HTTP server is nice, and it integrates well with acme-client for automated LetsEncrypt SSL certs.
    1 point
40. Introduction

On this forum (and discord), now and in the past, we have talked frequently about Computer Science topics. However, we never discuss software engineering. So here, I would like to change that a little by making a short article series covering the topic. Those who work in the industry may not get much out of this. This article is geared towards a newcomer to programming who has maybe just built some small programs in python: someone who is not receiving formal education and might not run into this on their own, or perhaps a computer science student who has not taken any classes in software engineering.

What is Software Engineering?

Software engineering is a field that essentially takes some computer science and adds some engineering into the mix. The term is often credited to Admiral Grace Hopper, who wrote one of the first compilers. Most importantly, software engineering is about applying concepts from computer science to build software systems in an efficient way. So, one could say it is a process for building software. Now, I would like to make a distinction between the two in an educational setting. An education in computer science is going to put more of a focus on theory. During a computer science education, topics such as artificial intelligence and compiler theory are going to be important parts of the program. An education in software engineering is going to provide some computer science education, but combine it with an engineering approach to building systems. It is important to note that a lot of computer science programs will offer courses in software engineering. Essentially it comes down to where the focus of the program is. As for the distinction in the workplace, there is not a major one. Both educational paths can lead to a job at google. The difference might be that a computer science student walks out of school already knowing a bit about artificial intelligence.
A software engineering student may have to walk out of school and do some self-driven learning. Which is okay; the biggest cornerstone of an engineering degree is learning to learn in a fast and efficient manner.

Why is design important?

Building small programs that do one or two tasks is simple. An example would be a script in python that automates a handful of tasks, or a basic FTP server. However, when making large systems, the game changes a lot. There is a difference between building a solitaire game and building a point of sale system. There is also a higher degree of risk involved. A solitaire game that crashes or has bugs may not have any severe consequences beyond the game ending. A point of sale system crashing can cause great harm to a business, leak information, etc. We mitigate this by having a design and a process to go with it. Instead of planning and implementing a system in our heads, we use tools to express the idea, communicate about it, and evaluate it. Design also helps with maintaining an already existing system. A program that consists of 100 lines is easy to maintain without these tools; a program with 100,000 lines of code spread across many files is not. Some things to keep in mind: the purpose of a design process and engineering in software is to manage complexity. The larger a program grows, the more complex it will become. We will not eliminate complexity, but we can reduce it. Reducing complexity also reduces the cognitive load on us as developers/programmers/engineers. Would it be easier for me to hand you the source of a software system so you can read through it to understand what it is, what it does, and how it does it? Or would it be easier for me to give you design documents that provide a higher level of abstraction of the parts and pieces? If you think the former, perhaps during this series I can convince you otherwise.
Design is also important because it gives a quicker way to spot problems with a software system and potentially fix an issue earlier rather than later. There are really interesting statistics on this topic that are shocking. Good places to look are Steve McConnell's Code Complete and Robert C. Martin's (Uncle Bob's) Clean Architecture. When working in the industry, a faulty implementation based on a poor (or no) design can easily start racking up millions of dollars for a company to fix. The point of the last paragraph is so important that I want it isolated: programs built around good design are easier to maintain. When making software, our standard of quality should not be "just good enough" or "it compiles." We should design while being forward thinking. Right now it compiles, but when you want to add a new feature, how hard is it going to be to implement? We can make programs modular and far easier to change down the road by putting in time upfront on design. There are also surprising stats showing that, while one may think jumping straight to code is the fastest way to get done, this is not really true. By investing time in design and actual engineering of a software system, you will tend to reach the final implementation sooner. A big reason is that having little to no design means you will probably have more bugs and design flaws to work out in the testing stages of the software life cycle. By having a design, you can code faster since you have something to code against, and you will save time in the testing phase by having fewer design problems to go back and fix.

Prerequisite knowledge

There is some prereq knowledge that I will expect you to already have. If you do not, topics and ideas may be harder to grasp in future articles. I would advise you to learn these topics to at least a novice level. They are: 1. OOP programming language and concepts (I will be using Java as my reference) 2.
Set theory 3. Some knowledge of UML (I will try to explain everything needed as best as possible, but a little extra knowledge beyond just what I will be writing about would be a good thing).

Step One: Requirements

The first step in designing a software system is knowing what the requirements are. This can be an idea a customer has for a software system they need. Examples could be an inventory management system, a social network platform, a chat service, etc. Or the requirements can be an idea you have for a piece of software you want to develop, perhaps as an open source project. If it is the former, a customer will probably give you a document that describes the system, or tell you about it (take notes). If it is the latter, pretend that you are writing it for a system you want someone else to build for you. For this article series, I will use as an example a point of sale system for a car dealership. Below is the requirements document. Using the requirements document, we can extract information that will tell us key aspects of what the system needs. The first thing to do is look over the document. Next is to read the document over again and extract nouns and noun phrases. Here are some nouns and noun phrases that I extracted from my example document: Appointment, Customer, Salesperson, CustomerServiceRep, Car, Engine, Dealership, Address, Focus, Camry. You may be able to spot some more, but this is just a short listing to give the idea. These nouns are going to represent classes we know our system will have. The next thing to do is analyze the document again, extracting verbs this time: Customize, Purchase, Query Inventory. Feel free to look for more; these are just a few to give a general idea. The verbs are going to represent functionality we know our program will have to provide. The program must facilitate the customer making a purchase. The program must allow us to query the inventory of cars that are in stock.
The customer will have the ability to customize a car by changing the engine. Using the nouns and verbs, a high-level abstraction starts to take shape around what the system will look like.
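The noun-and-verb pass above can even be mechanized as a rough first cut. The sketch below is illustrative only: the noun list is the article's, the verb-to-class mapping is my own guess at which class owns which behavior, and it emits Python stubs for brevity even though the series uses Java.

```python
# Toy sketch of the requirements-extraction step: nouns become candidate
# classes, verbs become candidate methods. The noun list comes from the
# article; which class owns which verb is my own illustrative guess.
nouns = ["Appointment", "Customer", "Salesperson", "Car", "Engine", "Dealership"]
verbs = {
    "Customer": ["purchase", "customize"],   # "the customer making a purchase"
    "Dealership": ["query_inventory"],       # "query the inventory of cars"
}

def class_skeleton(name, methods=()):
    lines = [f"class {name}:"]
    if not methods:
        lines.append("    pass  # responsibilities found in later design passes")
    for m in methods:
        lines.append(f"    def {m}(self): ...")
    return "\n".join(lines)

for n in nouns:
    print(class_skeleton(n, verbs.get(n, ())))
```

The point is not the generated stubs themselves but the discipline: every noun is a candidate class, every verb a candidate responsibility, and later design passes decide which ones survive.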
    1 point
41. Introduction

Before the compiler, there was the assembler. The invention of the assembler revolutionized early computers. Prior to the assembler, instructions were submitted to a computer physically, by some action such as throwing a physical switch. The pre-assembler world meant programming a computer in the machine's own native tongue, machine code: a tricky combination of countless 1s and 0s for even the simplest calculations. While most challenges today do not require the use of assembly, it is still vital to learn. Through my learning, I have gained a deeper understanding of what is going on at the lower level. It refines the process of thinking through challenges in a higher-level language and gives concrete form to what an array, a stack, or another data structure looks like in the CPU's world.

Tool

The tool used is one that my university uses, called PLPTool. It is an IDE and simulation environment for assembly based on the MIPS instruction set. It does not support the full MIPS instruction set, but the 27 instructions that are most vital in MIPS. It is an educational tool. PLPTool is written in Java, so you will need a JVM/JRE to run it. From the site, they have executables for linux, mac, and windows. However, be aware that if you're running a Java version greater than 8, the launcher will not recognize it. If this is the case, just download the jar and run it from the command prompt/terminal. java -jar <JAR File> Link to PLPtool site: http://progressive-learning-platform.github.io/home.html PLPTool comes with a lot of ways to visualize and simulate actions. There is a built-in tool to view the register file, giving a look at the values held within registers. There is an LED tool to see which LEDs are lit up, and switches to interact with the program. These are just a few examples. Below are two images.

Registers

Registers can be thought of as just storage locations.
PLPTool has ten temporary registers that we will use for an introduction to assembly with this tool. These registers are $t0-$t9. Registers in PLPTool hold a maximum value of 0xffffffff in hexadecimal; that is 8 'f's. A converter can be used to find the decimal value. Inputs into registers can come in several different forms. For this tutorial, I will focus on 3: we can define values for registers using binary, decimal and hexadecimal. If you are unfamiliar with these three number systems, I would recommend reading about them before continuing with this tutorial.

Load Immediate Instruction

The first instruction to learn is called the load immediate instruction. This instruction is really a pseudo-instruction built from 2 more basic instructions, but I will not cover that here. The load immediate instruction takes as arguments a register and a value. When it is run, the value will be placed in the register given. It takes two clock cycles for this instruction to complete; think of it as the instruction running twice. The program above shows the syntax: the command is li, followed by the register and a value. This is the decimal value 1. Accompanying it is another image that shows what the register file looks like after it is run. The value of 1 is stored in register $t0.

Special Memory Locations

Special memory locations exist. These special locations in memory are often used to interface with I/O devices. If you are familiar with C, the addresses look like what you would see when printing the address held in a pointer. The addresses are in hexadecimal. For now, we will focus on the LEDs. In order to use the LEDs, we must first set a register to the value of the LED memory address. When we access that register, we will be accessing the memory address stored in it.

Store Word Instruction

In order to make the LEDs output a value, we will need to use the store word instruction. This instruction takes 2 registers, and stores the value of one at the address held in the other.
In the case of the LEDs, it will copy the value from one register into the memory address stored in the other register.

Beginning control flow

Control flow is vital to how a computer program operates. We are familiar with it in languages such as C or java via if/else statements and methods/procedures/subroutines/functions. The first instruction for controlling the flow of a program to learn in MIPS is the jump instruction. Think of it as calling a function that is going to return void. Also, an important note: it will not return from the jump to resume the program where it left off prior to the jump. Think of it as working in one direction. Here is some pseudo code that will hopefully explain it. In the pseudo code, we have a main function and two other functions. When executing this, we will set x equal to 10, followed by jumping to the instructions in 'add-1'. Inside of 'add-1', we will add 1 to the value stored in x. The next code that will be executed in assembly will be the code inside of the function 'sub-1'. The code does not jump back up to main and execute "set y equal to 12."

Labels

Before we begin jumps, we must learn how to define a "function." We simply name the function and follow it with a colon. It is the same as how I defined main, add-1, and sub-1 in the pseudo code above.

Jump statement

The jump statement has a simple syntax. j my_label

Sample code: LED output in order

As you may notice from the code, this program will loop.
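To make li and sw concrete without PLPTool handy, here is a small Python analogy. It models the register file as a dictionary; the behavior of the two instructions follows the description above, but LED_ADDR is a stand-in value, not PLP's real memory map.

```python
# Python analogy (not PLPTool itself) for li and sw as described above: li
# puts an immediate value into a register; sw writes a register's value to
# the memory address held in another register. LED_ADDR is a placeholder.
registers = {f"$t{i}": 0 for i in range(10)}
memory = {}
LED_ADDR = 0xF0200000  # stand-in for the LED device's memory-mapped address

def li(rd, value):
    registers[rd] = value & 0xFFFFFFFF  # registers hold 32-bit values

def sw(rt, rs):
    """Store the value in rt at the address held in rs."""
    memory[registers[rs]] = registers[rt]

li("$t1", LED_ADDR)  # point $t1 at the LED device
li("$t0", 0b1010)    # bit pattern to display
sw("$t0", "$t1")     # write the pattern to the LEDs
print(bin(memory[LED_ADDR]))  # -> 0b1010
```

The dictionary is only an analogy, but it captures the indirection that trips people up at first: sw does not store "into $t1", it stores into the memory location whose address $t1 happens to hold.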
    1 point
  42. As part of larger project, I found myself wanting to create a database of as many DnD monsters as possible. So I set out to write a program that would take them off of public wiki pages so that it could do all of the work for me. The wiki I used was dandwiki and specifically their monster list for 5e. The database isn't particularly useful on its own since it would be impractical to try to read these database items yourself, and like I said, it is part of a larger project. But it could still be useful if you're playing without internet and needed a reference, and it's got a lot of cool regex in it to look at and learn from. Please note that they do have a "terms and conditions for non-human visitors" page here which my program is fully compliant with. I even put a 3 second interval in between page requests so as to be less bothersome. Additionally any database created as a result of running my program will be licensed under the GNU Free Documentation License v1.3 which is available here. Now that the legal nonsense mumbo jumbo is out of the way, the program works by requesting the page for their monster list, then goes through all hyperlinks within that page and requests the pages for any of them that have "(5e_Creature)" in the URL, then uses a ton of regular expressions to find all of the relevant data. It then places that data into this configuration in a .csv file: Note that, by the nature of how this program was written and its source, many of the resulting database entries are likely to be improperly formatted or missing elements. There are simply too many entries for me to manually check if they've been properly processed, and many of the wiki entries have inconsistent formatting. That said, I've written the program to be as flexible as possible and fixed many issues while writing the program, so hopefully even inconsistencies that I'm unaware of should be properly processed. 
Here is the code: (the indentation messed up a tiny bit) And here is a link to the download for the database so that you don't have to run this and bother the website owners: https://megaupload.nz/D7WdZbvfn4/Monsters_rar
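To give a flavor of the regex-heavy extraction step without hammering the wiki, here is a toy sketch. The HTML snippet and field patterns are invented for illustration; the real dandwiki markup is messier, which is exactly why the author's program has to be flexible (and remember the 3-second delay between real requests).

```python
# Toy version of the extraction step: pull stat fields out of a creature page
# with regular expressions. The snippet and patterns are invented examples,
# not the real dandwiki markup.
import re

sample_page = """
<b>Armor Class</b> 15 (natural armor)<br>
<b>Hit Points</b> 22 (4d8 + 4)<br>
<b>Speed</b> 30 ft.
"""

def extract(field, text):
    """Grab the text that follows a bolded field label, up to the next tag."""
    m = re.search(rf"<b>{re.escape(field)}</b>\s*([^<\n]+)", text)
    return m.group(1).strip() if m else ""

row = [extract(f, sample_page) for f in ("Armor Class", "Hit Points", "Speed")]
print(",".join(row))  # one CSV row per creature
```

Returning an empty string on a failed match mirrors the problem the author describes: inconsistent wiki formatting leaves some database fields blank rather than crashing the run.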
    1 point
  43. So Warfox and I have been working on this ever since my previous post here. Many additional features and bug fixes were added to the DnD initiative tracker, and we've added an entire new module (also for DnD) to track loot drops. We're up to a whopping 7 class files: Main.java initiativeDnD: DnD.java GUI.java Mob.java lootTableDnD: Loot.java GUI.java Popup.java As for what's changed, we now have a snazzy main menu where you can choose your module: Starting with the Initiative module, it still looks fairly similar, but now with an added toolbar at the top. You can change the text size and style under the window menu, and under File you have the option to save current encounters, and load older ones. This allows DMs to even set up encounters ahead of time, and load them up once the players reach them. Then you have the new module, which is the loot tracker which has 2 different GUIs depending on the ruleset you choose: Under the File menu, you have the option to open a .csv database which is your loot table. If you press About, then you will get this popup explaining the module: And when you use the module to get a piece of loot, you will see a popup like this: And the absolute best part of all is that if any of those fields were to be empty, such as an item that has no description or a weightless item such as a potion, then that field will be automatically omitted from the popup like so:
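As an illustration of the loot-table idea (the project itself is Java and its CSV schema isn't shown in the post, so the column names below are assumptions), loading a table and rolling a drop might look like this sketch, including the omit-empty-fields behavior described above:

```python
# Hedged sketch of a loot-table roll: read a CSV table, pick a random row,
# and omit empty fields from the display (as the post describes). The column
# names are assumptions; the real project is in Java with its own schema.
import csv
import io
import random

loot_csv = io.StringIO(
    "name,description,weight\n"
    "Potion of Healing,Restores 2d4+2 HP,\n"
    "Longsword,,3 lb\n"
)

table = list(csv.DictReader(loot_csv))
drop = random.choice(table)
# Empty fields (a weightless potion, an item with no description) are skipped.
print(", ".join(value for value in drop.values() if value))
```

Filtering falsy values before display is a cheap way to get the "automatically omitted" popup behavior without special-casing each field.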
    1 point
44. So for my mathematical modeling class, we had to do a final project where we created a probabilistic model on one of a few different available topics. Mine was the "rush hour" of an office lobby when all the employees arrive in the morning. All I was given were some basic immutable conditions, and the rest was up to me! Those conditions were:

The time between employee arrivals varies between 0 and 30 seconds in a probabilistic manner.
There are 4 elevators for 12 floors.
Elevators wait 15 seconds after a person enters them before closing doors (this resets after every entry).

I'm going to walk you through my thought process and my formulation of what I consider to be a pretty snazzy model! So the first question on my mind, and immediately what I interpreted to be the most important issue, was the probabilistic arrival time of the employees. I could easily just make people equally likely to arrive at any time in that 30 second interval, but that's very uninspired and boring. Instead I thought for a minute and decided that I wanted to start with a parabolic distribution like this: Where "α" is a normalizing constant to make sure that the cumulative probability for the relevant domain is 1. The reasoning is that I felt arrivals in the real world would likely be semi-grouped. This is because of things like traffic lights, carpools, public transport, crosswalk signals, etc... People are forced to wait at various checkpoints in their commutes, and then all given the green light at the same time. Therefore, after any given person's arrival, it's more likely that the next person is either going to arrive very shortly, or a while afterward. I then tried to see how I could improve it. I didn't like that the probability of an arrival after exactly 15 seconds was 0, because not only does that mean we'll never have a 15 second wait period, but we also will probably never have any that are between 10-20 seconds, because the probabilities are so low.
For that reason I needed a base probability constant to add to the equation. Additionally, I also wanted to experiment with the idea of a variable shift value. Instead of having the parabola constantly centered at 15 seconds, I could have its global minimum vary. It could be centered at 13 when arrivals should be more sparse, and at 17 when things are busier. This is a better model for an actual rush hour because in the middle of a rush hour it's busier than the beginning or end. So I made the following changes: This is just the graph of T(t) not of P(x). "a" is another normalizing constant that I solved for by setting T(0) to 0 so that it would accurately span the entire duration of the simulation. The shift value was 2400 on T(t) because the simulation will run for 80 minutes and that's 4800 seconds. "c2" is the maximum value of the time variable shift. For P(x) "mi" is a constant which represents the minimum shift value and "c" is the base probability value. From there I just needed to substitute the finished T(t) function into P(x), assign values to my constants, and solve for "α" and when you graph that final equation you get: Where any ZX slice of the surface represents the probability distribution for arrival interval at any given point in the simulation. Finally, I also had the idea to include an option for employees to take the stairs with roughly a flipped exponential distribution for the probabilities that any given employee will use them (e.g. 66% chance if on floor 2, 43% for floor 3, 27% for floor 4, etc...) Finally I had to write a program for a Monte Carlo simulation to actually implement the model: Main.Java Person.Java Elevator.Java Parts of the code might be slightly ham-fisted because I needed to just get it to work, but it works perfectly fine. And these were the results after running the simulation 5000 times: What's quite interesting is that you can actually see the time variable shift in these results. 
Only about 0.1% of people actually had to wait in line for an elevator, but the average amount of time that people spent in line if they did have to wait was 23 seconds rather than something inconsequential. This implies that the model did its job and toward the middle of the simulation a line formed, and then it dispersed when the rush hour died down.
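The shape of the base distribution can be reproduced with a few lines of rejection sampling. The exact constants from the project aren't in the post (they were in the images), so the base constant below is a stand-in; the point is only that the density is parabolic on [0, 30] with its minimum at 15 seconds, plus a floor so mid-interval gaps still occur.

```python
# Rejection-sampling sketch of the parabolic inter-arrival distribution
# described above: density proportional to (x - 15)^2 + C_BASE on [0, 30]
# seconds. C_BASE is a stand-in constant, not the project's actual value.
import random

C_BASE = 20.0               # floor so 10-20 second gaps still happen
PEAK = 15.0 ** 2 + C_BASE   # max of the unnormalized density on [0, 30]

def density(x):
    return (x - 15.0) ** 2 + C_BASE

def sample_gap(rng=random):
    while True:
        x = rng.uniform(0.0, 30.0)
        if rng.uniform(0.0, PEAK) <= density(x):
            return x  # accepted in proportion to the density

random.seed(1)
gaps = [sample_gap() for _ in range(10_000)]
edge = sum(1 for g in gaps if g <= 5 or g >= 25)
mid = sum(1 for g in gaps if 12.5 <= g <= 17.5)
print(edge > mid)  # arrivals cluster near the ends of the interval
```

Rejection sampling sidesteps having to invert the CDF, which matters once the time-variable shift makes the density change over the course of the simulation.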
    1 point
  45. Badass mind map for passive open source intelligence gathering. http://osintframework.com/
    1 point
46. In preparation for the CTF series I want to work on, I've been watching some videos of other people's processes. I highly encourage everyone to watch this one and appreciate its subtleties. https://youtu.be/_m_LY7JO9MM For those of you that didn't watch it (go back and watch it), the guy in the video has a vulnerable image from vulnhub.com called "pandora". He runs a portscan on the image and finds that the admin left himself a backdoor on one of the ports. After connecting to the port it prompts for a password. Mr. Hacker then tries all the common passwords (sex, secret, god, lol), and then wonders to himself: "Hmm... is this vulnerable to a timing attack?" The rest of the video is of him coding a short python script to exploit the port. This was the first time I've ever heard of a timing attack, and I have knowledge of most vulnerabilities. So at first I wondered if he was either (1) a genius, or (2) had prior knowledge and wanted to look cool. Turns out, though, that this problem falls into an entire subclass of exploits of the genre "timing attack", and this one, in particular, is possible because of comparison functions running in O(n) time.

What is a timing attack?

On wikipedia that certainly sounds complex, and I'm sure in some cases it can be fairly complex. But for our purposes all you need to understand is that:

Computers execute instructions to get things done
Computers take time to execute those instructions
Depending on how long it takes, we can make assumptions

Meat and Potatoes of this simple magic trick

Every password comparison function at some point needs to compare the inputted string to its saved value for the password. Here's some pseudo code for how this works:

for letter1, letter2 in zip(input_string, correct_stored_password):
    if letter1 != letter2:
        return False
return True

Notice that on the first letter that doesn't match, it goes ahead and returns False. No point in wasting time, right? This is actually the problem.
Don't be distracted by the for loop. This function does indeed run in O(n) time, but 'n' isn't the length of the string: it's the number of leading letters that are correct, because the function returns the moment it hits a mismatch. And this is how the timing attack works. While checking the password, the computer looks at the first letter and compares it to its stored value. If it's incorrect, it displays invalid password and returns, but if the first letter is correct it must check at least one more letter before returning! Hence the closer your guess, the further into your string the computer looks, and the longer it should take.

Note: in the video the guy actually does the opposite: he immediately starts looking for a letter that is rejected faster, which is characteristic of an invalid letter. He was not the first person to work on this image, and someone else had already posted their results first (you can no longer find a link). In reality, when looking for timing attacks you take an average of all the compute times and look for an outlier. An outlier is more useful than just looking for what takes a longer or shorter amount of time, because you don't know what the backdoor is written in and you don't know how the code is optimized. What's more, this is the only way to detect whether the code is doing hashing behind the scenes. In other words, this dude cheated for views and made you watch 30 mins of him being shitty at python before trying to wow you with his magic trick.

Now back to the post. When Mr. Hacker decided that he would use a timing attack, you could say he was using some lateral thinking. Would an administrator lazy enough to write a backdoor instead of setting up ssh actually take the time to hash the password? The answer here is no.
A timing attack such as this one wouldn't have worked if the password was hashed before comparison because, in all the commonly used secure hashing functions, slight changes in input have drastic effects on the output: even changing one letter completely changes the resulting hash. So hashing makes a timing attack impractical. Conclusion: this attack should work anywhere a linear comparison function is used and is not offset by a confounding function like a hash. This includes not just password prompts but also guessing games, finding cheat codes, etc.
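The avalanche effect described above is easy to see with Python's stdlib hashlib (the input strings are arbitrary examples of mine):

```python
import hashlib
import hmac

# A one-letter change in the input produces an unrelated-looking digest.
h1 = hashlib.sha256(b"pandora").hexdigest()
h2 = hashlib.sha256(b"pandorb").hexdigest()

# Hex positions where the two digests happen to agree: roughly
# 64/16 = 4 by pure chance, never a prefix that tracks the input.
agree = sum(a == b for a, b in zip(h1, h2))

# Python even ships a constant-time comparator for exactly this concern:
same = hmac.compare_digest(h1, h2)  # False
```

Because every input character influences the entire digest, an early-exit comparison of the hashes leaks nothing about which characters of the guess were right.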
    1 point
  47. Offensive Security has some of the most respected certifications in the industry, if not the most. It differs from other certs, like CEH, in that instead of providing knowledge that's a mile wide and an inch deep, it gives you hands-on drills and practice. Unfortunately, the program is also quite costly. If you can learn the whole thing proficiently in 30 days, you're looking at $800.00 for the OSCP alone. Ultimately, if you want the cert, you're going to have to pay. In the meantime, I want to write about how you can be acquiring the same skills now for cheap, and in some cases free. And maybe at some point down the line, you won't need all 30 days to get the parchment.

I'll also add that, while this is a damn decent set of certs and courses, it's not comprehensive and relies heavily on you to do your own research. Supplementary certs such as CCNA or LPIC will be needed, for example, to elaborate on any particular concept ('tis why I'm studying CCNA currently).

Disclaimer: I haven't obtained any of these certs myself yet, so I'm offering general information only.

Certifications:

OSCP
link: https://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/
Description: Learning Goals:
Identify existing vulnerabilities
Execute organized attacks
Write simple Bash and/or Python scripts
Modify existing exploit code to your advantage
Perform network pivoting
Perform data exfiltration
Compromise poorly written PHP applications
Keep going until you win

Official Training guide: Penetration Testing with Kali Linux (PWK)

Talking Point: The first bullet in the learning goals is very important to note. By the end of the course you're not going to be writing custom exploits; that isn't the point. The entire point is to get good at using what's already available in an efficient and creative way.
The point is to practice using the cyber kill chain, with success, until it's burned into your brain. You won't learn until you get that positive feedback: the shot of dopamine from actually compromising a box. Focusing on the details at this level would only slow down your learning, so for now just use other people's exploits and tools.

Substitutes/freebies:
Training guide + video series (2014): https://thepiratebay.org/torrent/20152226/Offensive-Security__PWK__Penetration_Testing_with_Kali
Free practice labs (vmware): https://www.vulnhub.com/
More labs, specifically concentrating on learning the Linux system hierarchy and common commands: http://overthewire.org

What I can't find is a viable substitute for simulating an actual network, and you're not always going to be testing from the same subnet as the target.

OSCE
link: https://www.offensive-security.com/information-security-certifications/osce-offensive-security-certified-expert/
Description: Learning Goals:
Obtain a shell from basic web application attacks such as XSS and directory traversal
Modify executable files with custom shellcode on Windows
Avoid AV
Deal with ASLR
Find possible 0days using fuzzing techniques, then develop an exploit

Official Training guide: Cracking the Perimeter (CTP)

Talking Point: The actual course description on this isn't too indicative of what you learn in the course, so I did my best to extract the learning goals from a 2012 CTP manual. This course gets quite a bit more advanced and relies much more on the individual's ability to do their own research. That said, I'm actually unconvinced of this course's usefulness outside of its case studies. The Shellcoder's Handbook is a thousand-page tome that elaborates way more on all of these topics. We're actually getting into the hard security research/computer science realm with this.
For labs in this course, I'd recommend finding exploits on exploit-db or packetstorm, setting up a debug environment yourself, fuzzing (this is the analog of recon at this level), and writing your own version of the exploit. Compare your code to the POC code.

Substitutes/freebies:
OLD training guide (2012): https://thepiratebay.org/torrent/7483548/Offensive_Security_-_BackTrack_to_the_Max_Cracking_the_Perimeter
Shellcoder's Handbook: http://index-of.es/Varios/Wiley.The.Shellcoders.Handbook.2nd.Edition.Aug.2007.ISBN.047008023X.pdf

OSWE
link: https://www.offensive-security.com/information-security-certifications/oswe-offensive-security-web-expert/
Description: Learning Goals:
Fingerprint web applications
Identify vulnerabilities
Exploit the vulnerabilities found
Write a report about it

Official Training guide: Advanced Web Attacks and Exploitation (AWAE)

Talking Point: This is the OSCP equivalent for web applications. Not much in the way of crafting your own exploits. Fortunately, new exploits found in web applications tend to be rehashes of other common vulnerabilities, so with webdev experience it starts to become intuitive anyway. Labs for web app attacks are everywhere, so this is the easiest one to learn the basics of.

Substitutes/freebies:
Web Application Hacker's Handbook (huge comprehensive tome): https://leaksource.files.wordpress.com/2014/08/the-web-application-hackers-handbook.pdf
lab (courtesy of mls577): https://www.owasp.org/index.php/OWASP_Mutillidae_2_Project
lab: hackthissite.org

OSWP
link: https://www.offensive-security.com/information-security-certifications/oswp-offensive-security-wireless-professional/
Description: Learning Goals:
Conduct wireless information gathering
Circumvent wireless network access restrictions
Crack various WEP, WPA, and WPA2 implementations
Implement transparent man-in-the-middle attacks
Demonstrate the ability to perform under pressure
Official Training guide: Offensive Security Wireless Attacks (WiFu)

Talking Point: This is a narrow topic that only covers wi-fi (no discussion of bluetooth, for example).

Substitutes/freebies:
Course manual (2012): https://thepiratebay.org/torrent/20152240/Offensive-Security_-_OSWP_-_WiFu

OSEE
link: https://www.offensive-security.com/information-security-certifications/osee-offensive-security-exploitation-expert/
Description: Learning Goals:
Reverse engineering, assembly/disassembly
Develop sophisticated exploits
Create custom shellcode
Evade DEP and ASLR protections
Exploit Windows kernel drivers
Perform precision heap sprays

Official Training guide: Advanced Windows Exploitation (AWE)

Talking Point: This course is only available live, by attending Black Hat here in Vegas. Need an expert to make a recommendation for this one.

Substitutes/freebies:
OLD course manual (2012; this is so freakin dated): https://thepiratebay.org/torrent/7835702/Offensive_Security_-_Advanced_Windows_Exploitation_(AWE)_v_1.1
A free course from Offensive Security, Metasploit Unleashed: https://www.offensive-security.com/metasploit-unleashed/
    1 point
  48. ExploitDB, Offensive Security's Exploit Database Archive is an amazing resource. Be it for google dorks, exploits, shellcode, or technical papers. https://www.exploit-db.com/ Want to be able to search for exploits offline, or via terminal? Check out the following, a few simple commands will arm you with the entire DB! https://www.exploit-db.com/searchsploit/
    1 point
  49. Here's a BASH ping sweep program I wrote for my systems programming class. Its use case is very narrow: you have a bash shell on a remote box but no access to a better recon tool (like nmap). Why not just send nmap over the wire and use it? Because you may be in a position where you can't chmod +x nmap after you do so. To be fair, you can't chmod +x this script either, but you can, with modification, feed it directly into your shell, no chmod required. I'm posting it in POC form for ease of analysis.

#!/bin/bash

function ip_to_decimal() {
    local dec_ip=0
    for ((a=4, b=1; b < 5; a--, b++))
    do
        let dec_ip+=$((`echo $1 | cut -d "." -f $b`<<$((8 * ($a - 1)))))
    done
    echo $dec_ip
}
#ip_to_decimal 192.168.56.101

function decimal_to_ip() {
    local ip
    for ((a=3, b=0; b < 4; a--, b++))
    do
        ip+=$(( ($1 & (0xff000000 >> (8 * $b))) >> (8 * $a) ))
        if [ "$b" -ne 3 ]
        then
            ip+=.
        fi
    done
    echo $ip
}
#decimal_to_ip 3232249957

function increment_ip_address() {
    local dec_ip=`ip_to_decimal $1`
    let dec_ip+=1
    local inc_ip=`decimal_to_ip $dec_ip`
    echo $inc_ip
}
#increment_ip_address 192.168.56.101

function ips_in_subnet() {
    local a=`ip_to_decimal $1`
    a=$(( ((~ $a) & 0xffffff) - 1 ))
    echo "$a"
}
#ips_in_subnet 255.255.255.0

if [ "$#" -ne 2 ]
then
    printf "Usage:\n\t%s <network-address> <subnet-mask>\n\n" $0
    printf "\tExamples:\n"
    printf "\t\t%s 192.168.56.0 255.255.255.0\n" $0
    printf "\t\t%s 192.168.56.0 255.255.255.128\n" $0
    printf "\t\t%s 192.168.56.128 255.255.255.128\n" $0
    printf "\n"
    exit
fi

number_ips=`ips_in_subnet $2`
ip_address=$1

for n in `seq 1 $number_ips`
do
    ip_address=`increment_ip_address $ip_address`
    (ping $ip_address -c 1 -W 1 | grep from | cut -d " " -f 4 | cut -d ":" -f 1 &) 2> /dev/null
done
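For comparison, the same host enumeration can be sketched in a few lines of stdlib Python (assuming Python 3 is on the box, which the post's bash-only constraint may rule out; the pinging itself is left out):

```python
import ipaddress

# Enumerate the usable host addresses of a subnet, mirroring what
# ip_to_decimal / increment_ip_address / ips_in_subnet do by hand.
net = ipaddress.ip_network("192.168.56.0/24")
hosts = [str(h) for h in net.hosts()]

# hosts runs from 192.168.56.1 to 192.168.56.254, i.e. 254 addresses:
# the same count ips_in_subnet computes for a 255.255.255.0 mask.
```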
    1 point