Jump to content

Leaderboard

Popular Content

Showing content with the highest reputation since 07/09/18 in Posts

  1. CAN bus is a really simple communication protocol originally made for cars, but these days it is used for almost anything, even in subsea christmas trees. For getting started with CAN, the wiki page is surprisingly good and is a nice starting point along with this url: https://opensource.lely.com/canopen/docs/cmd-tutorial/ Anyways, I will share two simple codebases. Be warned! The code is shitty, and both together were coded in less than a week, which is why it's an uncommented mess (literally made with a knife at my throat as project-saving kung-fu in an EU project). The code is without any license. Sadly I cannot show you the actual usage of the code as it's proprietary, but it's fairly simple so I will just inline it here:

```cpp
canbus_communicator = new CanThread("vcan0");
paxterGen3Tpdo = new PaxterGen3Tpdo();
canbus_communicator->addNode((CanMethodResolver *) paxterGen3Tpdo);
canbus_communicator->start();
```

The C version is very hacky. The first constraint was to write the software in C, which is nice as I like C, but I hadn't programmed in it in a couple of years, and it uses the deadly sin of "OOP function pointers", which can be hacky when distributing multiple signals in parallel. So we start by defining a simple canbus reader (implementation in the .c file):

```c
enum { INVALID_LENGTH_ARGUMENT = -1 };

struct canbus_reader {
    int canbus_socket;
    char *ifname;
    int (*read_frame)(struct canbus_reader *, int *, char [8], unsigned *);
    int (*write_frame)(struct canbus_reader, int, const char *, unsigned);
};
typedef struct canbus_reader canbus_reader_t;

canbus_reader_t *canbus_reader_create(char *ifname, bool block);
void canbus_reader_destroy(canbus_reader_t *reader);
```

So far, pretty clean; the pointers here are just for doing reading and writing contained within the namespace.
To ease parallelization we then wrap this into a canbus thread with the following API:

```c
struct canbus_thread;
typedef struct canbus_thread canbus_thread_t;
typedef int (*frame_handler_func)(int, char *, unsigned);

enum { MAXIMUM_AMOUNTS_OF_METHODS_PER_THREAD = 1 << 4 };

//__BEGIN_API
/**
 * Creates a handle for a canbus thread
 *
 * @param ifname The network interface name to listen to, preferably a can interface
 * @return A new canbus thread wrapper
 */
canbus_thread_t *canbus_thread_create(char *ifname);

/**
 * The canbus thread can handle a frame in multiple ways depending on how the
 * different listeners require the data.
 * @param canbus_reader The reader itself
 * @param func A function pointer which parses the processed can data on the format (id, data, len)
 * @return 0 if successful else -1
 */
int add_method_to_canbus_thread_handler(canbus_thread_t *canbus_reader, frame_handler_func func);

int start_thread(canbus_thread_t *thread); // THIS SHOULD PROBABLY BE REFACTORED INTO THE THREAD STRUCT FOR OOPness :D, note: this is dumb

void canbus_thread_destroy(canbus_thread_t *canbusThread);
//__END_API_
```

Still seems... kinda clean, but also shit. Whatever, it was hastily pulled together. So we inspect this idiot programmer's C file to see the struct, because surely they know how to program C in an embedded environment... right? The thread wrapper has the following struct:

```c
struct canbus_thread {
    canbus_reader_t *reader;
    bool isRunning;
    pthread_t _thread;
    int num_methods;
    frame_handler_func frame_handler_functions[MAXIMUM_AMOUNTS_OF_METHODS_PER_THREAD];
};
```

WTF, no one would be foolish enough to keep an array of function handles just to make C work like a modern OOP environment? Well, sorry to say, I am that idiot.
So doing simple things such as creating and running a thread turns into this abomination:

```c
void *run_can_thread(void *arg) {
    int id;
    unsigned len;
    char data[8];
    canbus_thread_t *canbus_thread = (canbus_thread_t *) arg;
    DLOG(INFO, "[%s] Thread func start \n", canbus_thread->reader->ifname);
    while (canbus_thread->isRunning) {
        if (canbus_thread->reader->read_frame(canbus_thread->reader, &id, data, &len) > 0) {
            for (int i = 0; i < MAXIMUM_AMOUNTS_OF_METHODS_PER_THREAD; i++) {
                if (canbus_thread->frame_handler_functions[i] != NULL) {
                    fprintf(stdout, "I am thread %s calling the func now!\n", canbus_thread->reader->ifname);
                    canbus_thread->frame_handler_functions[i](id, data, len);
                }
            }
        }
    }
    DLOG(INFO, "[%s] Thread func stop \n", canbus_thread->reader->ifname);
    return NULL;
}
```

With the implementation of sensors as you see in the C repository, we got the message that we could use C++. This was actually one of my first times using C++, but given it was an embedded env, it was basically just really nice C.
Which means we solve the above with simple classes like:

```cpp
class CanMethodResolver {
public:
    virtual int handle_frame(int id, char *data, unsigned len) = 0;
};
```

This allows you to define an interface with an external component (like NodeJ1939 in a car) as follows:

```cpp
NodeJ1939::NodeJ1939() {
    msgCount1 = 0;
    msgCount2 = 0;
    msg3State = false;
}

int NodeJ1939::handle_frame(int id, char *data, unsigned len) {
    if ((id & CAN_EFF_MASK) == ID.MESSAGE1) {
        return appendMessage1(data, len);
    } else if ((id & CAN_EFF_MASK) == ID.MESSAGE2) {
        return appendMessage2(data, len);
    } else if ((id & CAN_EFF_MASK) == ID.MESSAGE3) {
        if (!msg3State) {
            msg3State = true;
            return appendMessage30(data, len);
        } else {
            msg3State = false;
            return appendMessage31(data, len);
        }
    }
    return 0;
}

int NodeJ1939::appendMessage1(char *data, unsigned len) {
    maxVolt = ((float) ((data[0] << 8) | data[1])) / 10;
    maxCurr = ((float) ((data[2] << 8) | data[3])) / 10;
    charging = !data[4];
    msgCount1++;
    return 0;
}

int NodeJ1939::appendMessage2(char *data, unsigned len) {
    volt = ((float) ((data[0] << 8) | data[1])) / 10;
    curr = ((float) ((data[2] << 8) | data[3])) / 10;
    hwFail = (data[4] & 0x1);
    tempFail = (data[4] & 0x2);
    voltFail = (data[4] & 0x4);
    comFail = (data[4] & 0x10);
    msgCount2++;
    return 0;
}

int NodeJ1939::appendMessage30(char *data, unsigned len) {
    nomAhr = ((float) ((data[0] << 8) | data[1])) / 10;
    storedAhr = ((float) ((data[2] << 8) | data[3])) / 10;
    actualCurr = ((float) (((data[4] & 0x7f) << 8) | data[5])) / 10;
    actualPackVolt = ((float) ((data[6] << 8) | data[7])) / 10;
    soc = 100 * storedAhr / nomAhr;
    return 0;
}

int NodeJ1939::appendMessage31(char *data, unsigned len) {
    maxCellVolt = ((float) ((data[0] << 8) | data[1])) / 1000;
    minCellVolt = ((float) ((data[2] << 8) | data[3])) / 1000;
    maxCellTemp = ((float) (((data[4] << 8) | data[5]) - 200)) / 10;
    minCellTemp = ((float) (((data[6] << 8) | data[7]) - 200)) / 10;
    return 0;
}

int NodeJ1939::appendMessage1X(char *data, unsigned len) {
    return 0;
}
```

By simple inheritance:

```cpp
class NodeJ1939 : CanMethodResolver {
public:
    NodeJ1939();
    int handle_frame(int id, char *data, unsigned len);
    struct ID {
        static const int MESSAGE1 = 0x1806E5F4;
        static const int MESSAGE2 = 0x18FF50E5;
        static const int MESSAGE3 = 0x18075000;
        static const int MESSAGE1X = 0x1806E6F4;
    } ID;
    .................omitted
```

I will upload both the C and C++ repositories once I find a decent way of sharing them with the members of HAXME without exposing them completely.
    4 points
  2. Do you have that .cap file you got by deauthing your asshole neighbor that you just cannot seem to crack, even when using GPU accelerated cracking? Yeah, me neither, I totally would NEVER do that, because it's illegal. That said, instead of trying to crack that WPA/WPA2 (or greater) capture (if you're having this issue with WEP, then you have bigger problems than I can help you with), why not just bypass it? This tool is pretty dated but it's still badass. There are other great tools that have evolved since its inception, like Reaver and other tools that attack the WPS pin instead of the actual password, but I like this one the best. Kevin Mitnick said that the weakest link in security is almost always the human factor, and for any of you who have actually been on a hack or pentesting op, that's pretty fucking true. This goal can be accomplished with no hardware overhead (unlike when using a WiFi Pineapple from Hak5 [which btw is completely worth the money!]). Check out this page. Here is a snippet from said page:

About

Wifiphisher is a rogue Access Point framework for conducting red team engagements or Wi-Fi security testing. Using Wifiphisher, penetration testers can easily achieve a man-in-the-middle position against wireless clients by performing targeted Wi-Fi association attacks. Wifiphisher can be further used to mount victim-customized web phishing attacks against the connected clients in order to capture credentials (e.g. from third party login pages or WPA/WPA2 Pre-Shared Keys) or infect the victim stations with malwares.

Wifiphisher is...

...powerful. Wifiphisher can run for hours inside a Raspberry Pi device executing all modern Wi-Fi association techniques (including "Evil Twin", "KARMA" and "Known Beacons").

...flexible. Supports dozens of arguments and comes with a set of community-driven phishing templates for different deployment scenarios.

...modular.
Users can write simple or complicated modules in Python to expand the functionality of the tool or create custom phishing scenarios in order to conduct specific target-oriented attacks.

...easy to use. Advanced users can utilize the rich set of features that Wifiphisher offers but beginners may start out as simply as "./bin/wifiphisher". The interactive Textual User Interface guides the tester through the build process of the attack.

...the result of an extensive research. Attacks like "Known Beacons" and "Lure10" as well as state-of-the-art phishing techniques, were disclosed by our developers, and Wifiphisher was the first tool to incorporate them.

...supported by an awesome community of developers and users.

...free. Wifiphisher is available for free download, and also comes with full source code that you may study, change, or distribute under the terms of the GPLv3 license.
    3 points
  3. @AK-33 Sick build! I love how you totally have a case, but do not have a case. That design is awesome. Do you ever feel like it doesn't have enough protection? @cwade12c LOVE THE RGB... I am an RGB g00n myself (see my build down below). Not going to lie, I am super duper jelly of your 4 monitors. I currently only have one and need to at least get 2, and you have 4. Love it.

-------------------------------------------------------------------------------

THIS IS MY FIRST TRUE BUILD -- THAT I DID ENTIRELY BY MYSELF

I use this as my daily driver, for gaming and making YouTube videos. It's not super specced out in terms of CPU or GPU or anything like that, but to me, it's a very respectable unit that I've been dreaming of since I was a little kid. If you click on the video creator, you might find dozens of videos on the channel ;) In case you're interested in all the parts and how much they cost, the rig can be seen below, or you can find it yourself on PCPartPicker: https://pcpartpicker.com/list/NBbVj2 You'll find that on PCPartPicker, it says there are some problems with the build. "Using an older version of the BIOS" -- I'm not, I'm even using one better than 2203. "One SATA port is disabled" -- Ok, I got 5 others bro. Yes, I actually had to carve out some of my fans and water cooler in order to get everything to fit... so this was a valid error I guess XD I also have some more stuff in my "build" that PCPartPicker doesn't have... more of the "cool streaming stuff".

-------------------------------------------------------------------------------

Thanks for taking a p33k at my build PL0X.
    3 points
  4. Awesome rig, @AK-33! The water cooling looks SICK! What are the specs? I love your family of laptops, @WarFox. Which laptop from the family is your favorite, and why? Also, good looks on the Run BSD stickers - I will consider requesting some if I run BSD in the future. Here's a 30 second video of my setup. The tower is not at all impressive, so I didn't show it off. I didn't do any fancy chassis or lights on my rig this round.

Specs:
Operating System: Windows 10 Pro 64-bit / Debian 64-bit (dual boot)
CPU: Intel Core i9 @ 3.60GHz, Kaby Lake, 14nm technology
RAM: 32.0GB
Motherboard: Dell Inc. 0H0P0M (U3E1)
Displays: 2x LG ULTRAWIDE (2560x1080@60Hz), 2x HP VH240a (1080x1920@60Hz)
Graphics: Intel UHD Graphics 630 (Dell) 4095MB, NVIDIA GeForce GTX 1070 (Dell)
Storage:
476GB KXG60ZNV512G NVMe TOSHIBA 512GB (SSD)
931GB Seagate ST1000DM010-2EP102 (SATA)
931GB Western Digital WD My Passport 0820 USB Device (USB/SATA)
5589GB Western Digital WD My Book 25EE USB Device (USB/SATA)
930GB Western Digital WD My Book 1110 USB Device (USB/SATA)
4657GB Western Digital WD Game Drive USB Device (USB/SATA, SSD)
    3 points
  5. My supervisor for my thesis told me about this site last year, and it's one of the most valuable resources I know of. https://arxiv.org/ is a pre-print site where scientists upload their papers before they have been peer reviewed and published, and it currently has over 1.9 million papers. This means that the papers on arXiv are often the same papers being published in reputable journals, but they are not behind a paywall. These are pre-prints and have not been peer reviewed yet, but you can still read through them and analyze their methodology for yourself. I used a few papers from arXiv for my thesis on quantum resistant encryption algorithms.
    3 points
  6. Intro

In the previous post, we looked at the scope of the series and the tools that will be required. In this post, we are going to cover the most important piece of authoring Blu-rays: specifications. You can mux any video and audio input into a container file, burn any video and audio streams to a disc, encode any source to an output of your choosing, and call it "HD" or Blu-ray compliant. That does not make it so. There are specifications that must be followed in order for your content to be deemed Blu-ray compliant. Compliance is important because if the media you author is Blu-ray compliant, you can be sure that it will work on any Blu-ray player.

Specifications

In order for your media to be considered Blu-ray compliant, the following rules must be followed. We are only going to concern ourselves with the Blu-ray spec at this time, which excludes Ultra HD Blu-ray and Blu-ray 3D. (That also excludes H.265/HEVC, which only enters the picture with Ultra HD Blu-ray.)

Video Codecs:
MPEG2 - Main Profile at High Level (MP@HL) or Main Profile at Main Level (MP@ML)
H.264 (AVC) - High Profile at 4.1/4.0 Level (HP@4.1/4.0) or Main Profile at 4.1/4.0/3.2/3.1/3.0 Level (MP@4.1/4.0/3.2/3.1/3.0)
VC1 - Advanced Profile at Level 3 (AP@L3) or Advanced Profile at Level 2 (AP@L2)

Video Frame Size:
1920×1080 29.97 frames interlaced / 59.94 fields (16:9)
1920×1080 25 frames interlaced / 50 fields (16:9)
1920×1080 24 frames progressive (16:9)
1920×1080 23.976 frames progressive (16:9)
1440×1080 29.97 frames interlaced / 59.94 fields (16:9)
1440×1080 25 frames interlaced / 50 fields (16:9)
1440×1080 24 frames progressive (16:9)
1440×1080 23.976 frames progressive (16:9)
1280×720 59.94 frames progressive (16:9)
1280×720 50 frames progressive (16:9)
1280×720 24 frames progressive (16:9)
1280×720 23.976 frames progressive (16:9)
720×480 29.97 frames interlaced / 59.94 fields (4:3/16:9)
720×576 25 frames interlaced / 50 fields (4:3/16:9)
Audio Codecs:
Dolby Digital (up to 5.1 channels with a maximum bitrate of 640 Kbit/s)
Dolby Digital Plus (up to 7.1 channels with a maximum bitrate of 4.736 Mbit/s)
Dolby Lossless (up to 9 channels with a maximum bitrate of 18.64 Mbit/s)
DTS (up to 5.1 channels with a maximum bitrate of 1.5244 Mbit/s)
DTS HD (up to 9 channels with a maximum bitrate of 24.5 Mbit/s)
Linear PCM (up to 9 channels with a maximum bitrate of 27.648 Mbit/s)

Subtitles:
Image bitmap subtitles (.SUP)
Text subtitles (.SRT)

Maximum Video Bitrate: 40 Mbit/s
Maximum Total Bitrate: 48 Mbit/s
Maximum Data Transfer Rate: 54 Mbit/s

I highly recommend reviewing the following resources to learn more about Blu-ray specifications and structure:
http://www.hughsnews.ca/faqs/authoritative-blu-ray-disc-bd-faq/4-physical-logical-and-application-specifications
https://www.videohelp.com/hd
https://forum.doom9.org/showthread.php?t=154533
VideoHelp and doom9 will be your best friends. Use those resources.

Background

I can just toss the Blu-ray specs out there, but understanding is also important. We can blindly click on things and blindly pass arguments... or make informed decisions. Let's talk a little bit about H.264 AVC. You can think of H.264 as a family of profiles. Each profile has different rules relating to the encoding techniques and algorithms used to compress files. The Baseline profile is the primary profile used for mobile applications, video conferencing, and low powered devices. It benefits from great compression ratios and techniques like chrominance subsampling and entropy coding. The Main profile is the primary profile used for standard definition television broadcasts. It benefits from all of the Baseline profile enhancements, in addition to improved frame prediction algorithms. The High profile is the primary profile used for disc storage and high definition television broadcasts.
It benefits from achieving the best compression ratios, using transformation techniques that can reduce bandwidth requirements by up to 50%. Profiles are proportional to the level of complexity required to encode/decode; thus, higher complexity profiles require more CPU power. Levels are another type of configuration that sets constraints on the encoder/decoder. The levels are a reflection of history, with H.264 evolving and growing as a standard. While profiles define rules for encoding techniques, levels place maximums on:
Maximum decoding speed (Macroblocks/s)
Maximum frame size (Macroblocks)
Maximum video bitrate (Kbit/s)

There are currently 20 levels, with the lowest being Level 1 and the highest being Level 6.2.

Level 1 defines constraints of:
Maximum decoding speed of 1,485 Macroblocks/s
Maximum frame size of 99 Macroblocks
Maximum video bitrate of 64 Kbit/s

Level 6.2 defines constraints of:
Maximum decoding speed of 16,711,680 Macroblocks/s
Maximum frame size of 139,264 Macroblocks
Maximum video bitrate of 800,000 Kbit/s

Thus, you arrive at resolutions ranging from 128x96 (Level 1) through 8,192x4,320 (Level 6.2). Now, when we look back at the Blu-ray specifications, you can use your knowledge of H.264 profiles and levels to choose appropriate encoding techniques and constraints that fall within the spec.

Viewing Media Specifications with MediaInfo

As you might imagine, it is important to always know the specifications of your audio and video, so a tool that can quickly show you this information in a presentable manner is essential. There are quite a few tools for this, and the one I like most is MediaInfo. It is free open-source software that is simple to use.
Download and install MediaInfo.
Set your View. By default it is Basic. I really like Tree.
Open a video or set of videos under File, and that's it!
As we can see in this example, the media file I selected uses AVC and was encoded using x264. Things like the frame rate (23.976 frames/s, constant), bitrate (2,741 Kbit/s), resolution (720p), and encoding settings are quickly available. Here are the encoding settings that were used for this file:

cabac=1 / ref=16 / deblock=1:0:0 / analyse=0x3:0x133 / me=umh / subme=10 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=32 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=-2 / threads=8 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=0 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=16 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=288 / keyint_min=23 / scenecut=40 / intra_refresh=0 / rc_lookahead=60 / rc=crf / mbtree=1 / crf=14.0 / qcomp=0.60 / qpmin=0 / qpmax=81 / qpstep=4 / ip_ratio=1.40 / aq=3:1.00

In the next tutorial, we will look at ripping from physical media, battling DRM, and destroying senseless region locks.
    2 points
  7. Okay, people ... show 'em if you got 'em! Meet the love of my life: Her name is Scimitar. Shoutout to the good people at Overkill Computers for building her for me!
    2 points
  8. Just for you guys, I reconfigured my PC to turn on the RGB. It's placed under a desk in the 'office' (at home), next to where my wife now also sits while she finishes her PhD, so the whole thing is pretty much hidden. I use 4 monitors with a KVM so that we can switch between the machines; otherwise we have 2 screens each when we are both working.

Specs:
Intel Core i9-10900K
ASUS ROG STRIX Z490-F GAMING
Nvidia RTX 3090 TUF Gaming OC
Samsung 970 Evo Plus NVMe PCIe M.2 1TB
Kingston SKC2500M81000G 1TB
Seagate FireCuda SSHD 2TB (2016)
Seagate Barracuda 2TB (2018)
Kingston HyperX DDR4 3200 C18 4x16GB
    2 points
  9. There are some pretty badass resources out there for Shodan. A good place to start to really see some of the crazy shit you can do with it, as well as to avoid a visit from the Department of Homeland Security, can be located here: This is a badass talk. Dan is a kick-ass Defcon speaker. Also, this quick guide will introduce you to Shodan: https://www.hackeracademy.org/hacking-with-shodan-how-to-use-shodan-guide/ Here are some cool pentesting related projects that use Shodan: https://awesomeopensource.com/projects/shodan
    2 points
  10. Just wanted to gather the opinions of others and also put out some of my thoughts. It seems like the big contenders in this field are Rust, Zig and D. I think Nim is also targeting the systems programming space of C, along with V lang. Of all the languages, I personally like the syntax of D and its metaprogramming concepts. It pretty much looks like what C++ should have been. I also like how memory safety in the compiler is not the default and instead has to be specified when to use it and when not to. That might help cut down on compile times: a function that does a simple calculation, like computing an interest rate, might only use stack variables and nothing allocated on the heap, so it doesn't really need the memory safety features wasting time on it, but adding a node to a list might. I have dabbled some in Rust. Honestly, I don't like it. The syntax just seems a little overly complicated, and I feel like a lot of words in the ecosystem are not in fact new concepts, but renamings of concepts already present in computer science. One thing I do like about Rust: the compiler is verbose, which always helps with troubleshooting/debugging. I also like that it catches when branches of execution are not being handled, such as unhandled error cases. Zig has gotten some buzz in the BSD community, but I see little mention of it elsewhere. However, it is not at a 1.0 release yet, so that could be a reason why. Overall, I don't think these languages will fully replace C. It is just so easy to port and get stuff bootstrapped. Not to mention the time and resources needed to re-implement something like the Linux kernel 100% in Rust or another language would take forever. I see the C language as timeless and always having a use case. Maybe its use will lessen some with the likes of Rust, D and Zig coming up, but we probably won't see a day in my lifetime where C code isn't at play somewhere.
    2 points
  11. We covered some of this in my Secure Software Engineering class. Lots of great info and lots of great tools out there. NIST is pretty awesome. SEI is also pretty amazing for looking up things dealing with code. For those unfamiliar, SEI has documentation for each language on common insecure code snippets, why each is insecure, and better ways to write the code while achieving the same result. SEI for C, as an example: https://wiki.sei.cmu.edu/confluence/display/c
    2 points
  12. HCL AppScan CodeSweep will try to detect vulnerabilities within your code each time you save your code. It comes as a VSCode extension or as a GitHub Action, so that it will scan code upon a pull request. It supports scanning files of the following types: Android-Java, Angular, Apex, ASP.Net, C, C#, Cobol, ColdFusion, Golang, Groovy, Infrastructure as Code, Ionic, JavaScript, jQuery, Kotlin, MooTools, NodeJS, Objective-C, Perl, PHP, PL/SQL, Python, React, React Native, Ruby, Scala, Swift, T-SQL, TypeScript, VB.Net, VueJS, Xamarin. VSCode Extension: https://marketplace.visualstudio.com/items?itemName=HCLTechnologies.hclappscancodesweep Github Action: https://github.com/marketplace/actions/hcl-appscan-codesweep
    2 points
  13. Introduction

Hi all! I wanted to take some time to put together a comprehensive privacy guide with the goal of offering viable, privacy-oriented alternatives to common services and software. When determining my recommendations and suggestions, I am mostly utilizing the following criteria:
- Follows the GNU four freedoms
- Services not based in mandatory key disclosure jurisdictions
- Audited or transparent

Motivation

"That's great Wade, but I don't have anything to hide." This is a fallacy I would like to disrupt. Voluntarily giving information away is perfectly reasonable, so long as one understands the costs/benefits and risks. There are security considerations that many people fail to realize when they suggest that privacy is not important. Humans can be the greatest vulnerability and the easiest way to gain unauthorized access to a system; simply knowing information, especially what people voluntarily provide or publicly make available, can be valuable in the information gathering phases of an attack. An attacker can use this information to social engineer you or people related to you, causing potential financial damage to you or those around you. Some in the intelligence community suggest that reducing privacy is a necessary cost of increasing security. I look at this differently: strong privacy goes hand-in-hand with security. I will attempt to demonstrate this in a related thread, Twenty+ Reasons Why Mass Surveillance is Dangerous. In the meantime, you are welcome to view my original publication on Packet Storm Security titled, Twenty Reasons Why Mass Surveillance is Dangerous. Additional resources I'd like to recommend on why privacy is important, to support my motivation: The Value of Privacy by Bruce Schneier; When Did You First Realize the Importance of Privacy?
by EFF; and The Little Book of Privacy by Mozilla.

Table of Contents
- Providers
  - Cloud Hosting
  - DNS
    - Resolvers
    - Clients
  - Email
    - Hosts
    - Clients
  - Image Hosting
  - News Aggregation
  - Search Engines
  - Social Networks
  - Text Hosting (Pastebin)
  - Video Hosting
  - Web Hosting
- Software
  - Calendars and Contacts
  - Chat
  - Document and Note Taking
  - Encryption
  - File Sharing
  - Metadata Removal
  - Password Managers
  - Web Browsers
    - Browser Specific Tweaks
    - Browser Specific Extensions
- Operating Systems and Firmware
  - Desktop
  - Mobile
  - Routers

I will update this thread and table of contents as the subsidiary topics are created.
    2 points
  14. A couple weeks ago an organization called intigriti had a hacking challenge where people were to exploit an XSS vulnerability in this page: https://challenge.intigriti.io/ Unfortunately the competition is over and it has been solved in numerous different ways, but they left the page up, so you can still go test your skills! In case they ever take that down you can still access the code for the challenge, as well as multiple solutions and explanations, here: https://blog.intigriti.com/2019/05/06/intigriti-xss-challenge-1/
    2 points
  15. In my recent class, we did a series of languages from different paradigms to get an understanding of how they are used, pros/cons, etc. Here is some code I wanted to share from a portion of my homework for anyone who hasn't seen LISP. All in all, it is a pretty fun language to tinker with that I may end up doing some more with on my own down the road.

```lisp
; Adds two numbers and returns the sum.
(defun add (x y)
  (+ x y))

; Returns the minimum number from a list.
(defun minimum (L)
  (apply 'min L))

; Function that returns the average of a list of numbers.
(defun average (number-list)
  (let ((total 0))
    (dolist (i number-list)
      (setf total (+ total i)))
    (/ total (length number-list))))

; Function that returns how many times an element occurs in a list.
(defun count-of (x elements)
  (let ((n 0))
    (dolist (i elements)
      (if (equal i x) (setf n (+ n 1))))
    n))

; Returns the factorial of a number using an iterative method.
(defun iterative-factorial (num)
  (let ((factorial 1))
    (dotimes (run num factorial)
      (setf factorial (* factorial (+ run 1))))))

; Using a recursive method, this function returns the factorial of a number.
(defun recursive-factorial (n)
  (if (<= n 0)
      1
      (* n (recursive-factorial (- n 1)))))

; This function calculates a number from the Fibonacci sequence and returns it.
(defun fibonacci (num)
  (if (or (zerop num) (= num 1))
      1
      (let ((F1 (fibonacci (- num 1)))
            (F2 (fibonacci (- num 2))))
        (+ F1 F2))))

; Takes a list and returns all elements that occur on and after a symbol.
(defun trim-to (sym elements)
  (member sym elements))

; Returns the Ackermann function of two numbers.
(defun ackermann (num1 num2)
  (cond ((zerop num1) (1+ num2))
        ((zerop num2) (ackermann (1- num1) 1))
        (t (ackermann (1- num1) (ackermann num1 (1- num2))))))

; This function defines test code for each previous function.
(defun test ()
  (print (add 3 1))
  (print (average '(1 2 3 4 5 6 7 8 9)))
  (print (minimum '(5 78 9 8 3)))
  (print (count-of 'a '(a '(a c) d c a)))
  (print (iterative-factorial 5))
  (print (iterative-factorial 4))
  (print (fibonacci 6))
  (print (trim-to 'c '(a b c d e)))
  (print (ackermann 1 1)))

; Calls the test function.
(test)
```
    2 points
  16. In my DnD group we've always tracked initiative on a whiteboard, and it's always been a pain in the ass. We'd write down the names of everyone in the encounter, take note of their initiative scores, rewrite the whole list in order, and then we'd do all damage calculation by hand. It took way too long and was always very anticlimactic. We'd be rushing through a cave to some epic music and, at the peak of excitement, the DM shouts "You're greeted by 5 vicious ancient dragons!!", and then we'd have to pause for 5-10 minutes while we fumbled around with our whiteboard, and even then the encounter itself would be a bit clumsy as we haphazardly tried to figure out damage and whose turn it was. No more! Now there's a tool which will do all of that for you! (Though soon after finishing this program I found out there are dozens of free mobile apps that do the same thing...) This tool is object oriented, and it keeps track of Mob objects in a linked list. Here is a screenshot: Clicking the bottom 3 buttons creates popup dialogues that you can use to enter the information. Here is the code: Main.java, GUI.java, Mob.java. There are some small limitations: There is no healing button. What you can do instead is just enter a negative number for damage. I could have easily added a healing button with only a few lines of code, but I felt that it would clutter the UI a bit for something that is virtually identical to the damage button. The program does not distinguish between NPCs and players. The only downside of this is that if a player "dies" then it doesn't prompt them to do their death saves. Hopefully your DM pays enough attention to notice when the player is skipped in the order and just asks them to do it themselves.
    2 points
  17. Here is a bit of an incomplete program I started. Well, the code I post works, but I had planned to extend it. This calculator has a GUI and takes into account order of operations. The only issue I've had with it is that the output can be a little wonky when answers are negative (such as 1 - 9 * 9). At some point, when I have time to work on it again, my original plan was to build in the functionality to input an equation and allow the user to specify a range of values that X can hold; it would compute and output all of the results. And of course to add in more operations such as trig functions, etc. Essentially my end goal at some point is a calculator that could take the place of my graphing calculator. Main.java Calculator.java ParseCalculation.java
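For anyone curious how order of operations can be handled, here is a minimal Python sketch of the classic two-stack (shunting-yard style) approach. It is an illustration of the technique, not the post's ParseCalculation.java, and it handles negative-answer cases like 1 - 9 * 9 fine:

```python
# Two-stack evaluator: values on one stack, operators on the other.
# Before pushing an operator, apply any pending operator of equal or
# higher precedence, so * and / bind tighter than + and -.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def evaluate(tokens):
    values, ops = [], []

    def apply_op():
        op = ops.pop()
        b, a = values.pop(), values.pop()
        values.append({'+': a + b, '-': a - b,
                       '*': a * b, '/': a / b}[op])

    for tok in tokens:
        if tok in PREC:
            while ops and PREC[ops[-1]] >= PREC[tok]:
                apply_op()
            ops.append(tok)
        else:
            values.append(float(tok))
    while ops:
        apply_op()
    return values[0]
```

For example, evaluate("1 - 9 * 9".split()) multiplies first and then subtracts, giving -80.0.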
    2 points
  18. I am currently about to finish a class in my coursework that deals with digital logic at a very basic level, so I would like to share a little bit of what I have learned. Data Representation in a Computer Digital communication can be traced back to the days of Samuel Morse and the invention of the telegraph. Communicating over long distances via wire required some sort of standardized system of communication, and Samuel Morse developed the famous system that we know as Morse code to facilitate it. On paper, the language is represented by a series of dots and dashes; when spoken, it is represented by long (DAH) and short (DIT) beeps. By standard convention, a "dah" has a width of 3 "dits." (A) .- [DIT - DAH] (B) -... [DAH - DIT - DIT - DIT] While used in the telegraph, Morse code was not implemented in computers, but it became a real-world, pre-computer example of how information could be stored. Morse code was designed to travel across a wire by turning current on and off, generally by a telegraph operator tapping a metal paddle onto a metal surface. Essentially, it is just a switch. Morse code didn't become the standard of data representation; binary logic was chosen instead, representing information in 1s and 0s rather than long and short audio beeps. A "HIGH" voltage is generally represented by a "1," also known as "ON/TRUE." A "LOW" voltage is represented by a "0," or "OFF/FALSE." Now, binary is more than just a convention; it is an actual way of doing mathematics. We conventionally use a "base ten" system of counting, also known as the "decimal system" (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10). Binary is a "base two" system where there are only two unique characters that make up the whole number system, which can be repeated to make more complex numbers.
Look at the progression below: 0 (zero) 1 (one) 10 (two) 11 (three) 100 (four) Each individual symbol is dubbed a "bit" and can only represent 2 possible values, so "10" is 2 bits in width. To see how many possible values a certain number of bits can represent, calculate 2 to the power of the number of bits. 2^2 = 4 00 01 10 11 2^3 = 8 000 (zero) 001 (one) 010 (two) 011 (three) 100 (four) 101 (five) 110 (six) 111 (seven) The bottom line is that information in computers can be represented as a switch or series of switches. Imagine we have a battery connected to 8 light bulbs with a switch between the battery and each bulb. So, we have 8 light bulbs and 8 switches. Using binary, we can represent 2^8 numbers starting from zero by switching on lights. Light bulbs that are lit represent a 1; light bulbs that are not lit represent a 0. Computers at the very basic level are a system of switches that perform operations on switches to change the system's state. Boolean Algebra and Truth Tables George Boole was a mathematician whose goal was to relate human decision making to mathematical logic; he wanted to develop a mathematical way of expressing logic. Thus, he developed what we call Boolean Algebra. This form of algebra uses typical math symbols that we are all used to seeing, but they have a different meaning. In this form of math, the state of a machine, or a decision being made, is equal to an equation of variables that take on certain behavior based on the state of inputs and their relation to one another. A good way to explain this is to take a look at an example and break it down. F = ab + c'b F is the output of the equation. On the right side of the equals sign, we have three variables (a, b, c). "ab" is an expression that looks like multiplication, which in Boolean algebra represents "and." The addition symbol represents an "or." An apostrophe means inversion.
Any value that is not inverted is assumed to represent true; an inversion of a value means false. This system also assumes that F is true. We can read the equation as: F is true if (a and b are true) or (c is not true and b is true). A quick table for reference: * AND + OR ' NOT Another, easier way to represent this logic is a truth table. The following link contains a file named "truthtables.pdf" with three sample truth tables. Each column represents an input or output. The top row of each table is a label; underneath is the state of that input or output. Schematics of a Basic Digital Circuit Now that some base information has been established, we can discuss the basic circuits a system can use to execute logic. For creating digital logic circuits in class, I used Logisim, which is what I will be using to create examples. There are three main basic components in a digital circuit, each made up of transistors. Their construction from transistors is outside the scope of this post. These three main components are the ones discussed in the previous section: AND, OR, and NOT. Here are two images: the first shows how the gates are represented; the second shows the truth tables that explain the logical operation performed by each type of gate. Using our Boolean equation and a truth table is a quick way to prototype a digital circuit. Here is a drawing in Logisim of the circuit that would produce the same results as the equation "F = ab + c'b." Recreating this circuit in Logisim or a similar program, we can see that its behavior matches what our truth table says. To do this for any Boolean expression, simply take the inputs and connect them with their appropriate gates. In the case of "ab," the inputs named a and b are connected to the two input pins of the AND gate. The output is fed to an OR gate, which is represented by the addition symbol.
Input c is inverted by a NOT gate; the output of the NOT gate is fed into one input of a second AND gate, whose other input comes from b. The output of this second AND gate is also fed into the OR gate. If either AND gate outputs true, then the output of the circuit (F) will be true.
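The equation above is also easy to check in code. This small Python sketch enumerates the full truth table for F = ab + c'b, matching the gate wiring just described:

```python
from itertools import product

# F = ab + c'b, where '*' (juxtaposition) is AND, '+' is OR,
# and the apostrophe is NOT.
def F(a, b, c):
    return (a and b) or ((not c) and b)

# Print every row of the truth table: inputs a, b, c and output F.
for a, b, c in product([False, True], repeat=3):
    print(int(a), int(b), int(c), '->', int(F(a, b, c)))
```

Comparing the printed rows against the truth table from the PDF (or against the Logisim circuit) is a quick sanity check before wiring anything up.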
    2 points
  19. ether-vote A decentralized voting application using the Ethereum blockchain architecture. Features Initialize a collection of candidates who will be applying for a position Votes are stored on the blockchain No central authority is required to trust Goals This current version is a proof of concept. Voting systems can serve as a building block for many complex decentralized applications. In the future, the following goals will be completed: Rebuild the app using the Truffle framework Provide clear instructions for deploying the dapp to a testnet Add the ability to interact with the smart contract from the command line Implement a voting token (with a limited supply) into the smart contract Implement a payment system into the dapp that would allow users to buy/sell voting tokens Code EtherVote.sol

pragma solidity ^0.4.11;

contract EtherVote {
    mapping (bytes32 => uint8) public numberOfVotesReceived;
    bytes32[] public listOfCandidates;

    function EtherVote(bytes32[] candidates) {
        listOfCandidates = candidates;
    }

    function isValidCandidate(bytes32 candidate) returns (bool) {
        for (uint index = 0; index < listOfCandidates.length; index++) {
            if (listOfCandidates[index] == candidate) {
                return true;
            }
        }
        return false;
    }

    function getTotalVotesForCandidate(bytes32 candidate) returns (uint8) {
        require(isValidCandidate(candidate));
        return numberOfVotesReceived[candidate];
    }

    function setVoteForCandidate(bytes32 candidate) {
        require(isValidCandidate(candidate));
        numberOfVotesReceived[candidate] += 1;
    }
}

.bowerrc

{
    "directory": "web/vendor/"
}

bower.json

{
    "name": "ether-vote",
    "appPath": "web",
    "version": "0.0.1",
    "dependencies": {
        "lodash": "~4.17.4",
        "bootstrap": "v4.0.0-alpha.6",
        "less": "~2.7.2"
    }
}

package.json

{
    "name": "ether-vote",
    "version": "0.0.1",
    "devDependencies": {
        "ethereumjs-testrpc": "^4.1.1",
        "web3": "^0.20.1",
        "solc": "^0.4.16"
    }
}

Usage (Node) To retrieve the number of votes for a given candidate: contractInstance.getTotalVotesForCandidate.call('Holo'); To
cast a vote for a particular candidate: contractInstance.setVoteForCandidate('Kurisu', {from: web3.eth.accounts[1]}); Installation ether-vote requires Node.js and bower to run. Step 1 - Install the frontend dependencies: bower install Step 2 - Install the node modules: npm install Step 3 - Run testrpc node_modules/.bin/testrpc This will generate 10 keypairs (public addresses / private keys) that each have 100 Ether for testing purposes. For example: EthereumJS TestRPC v4.1.1 (ganache-core: 1.1.2) Available Accounts ================== (0) 0x3853246f7dd692044b01786ea42a88197f6dfef9 (1) 0x1067092bee809c703ed33c11cc2ca3f3d3e33f1f (2) 0x4b9ad5d76fc3abe51d02fa9c631fe2e6dd21261a (3) 0xbe5dacc37242be5ca41baa25a88657e73fbae2c1 (4) 0x8afc23d930072c286c31a22d6ec5cb9330acd51e (5) 0x21deb9442d2ac8aefdeaf4521e568a98de3ebb6f (6) 0x39c9c3fffaff694388354aa40d22236ff102cb01 (7) 0x6927e56ae99f8a9531eaa5769486f0d9c67f1d07 (8) 0x65ad95852c58d7a9ab6177a55aa50f4c98507a83 (9) 0xb963574b692ace8f3f392531ba46788258d19eb6 Private Keys ================== (0) fb1e07512bfa729237496733dce0ba217356aaa5c14aecf3cecc317042bc77cc (1) 1b504d05041f1513c14dda6cfcced3b28ae5a47e33a75ce84a5d724adef69f6a (2) e5756fb44810101d141443a4f20d21dbb7ddfb79157a447721a3fc8a118934bc (3) bf811c983a80f53ec805bb956720946672a45e6739fe9d34f8099855f3658f17 (4) 681a0a2d42087966db7ca600f92c9b375f87b2e6dfae53e9358dbf54f3e26fc8 (5) b104ed383582580eae090a6d883307245d67d338db9e988312c28a30c61b543a (6) f74738475aef7b0340f902ea85c0900831b1e1b337bc0f0891e56540eed26491 (7) 96dfa361e52f3f45b24a058846ea6df844f8a89842ef83855309bb0c7827913f (8) 9cb8adf3b2e5026582b20f0c65aae2c2c4f6adb3e406cd3a52df93050a5b12fe (9) 4b152799a199aa7200432698d14aa80f970232ee0c97809e45b87880814dad65 HD Wallet ================== Mnemonic: drama aspect juice culture foot federal frequent pizza hawk giggle tenant happy Base HD Path: m/44'/60'/0'/0/{account_index} Listening on localhost:8545 Step 4.0 - Run node Step 4.1 - Include web3.js Web3 = require('web3'); web3 
= new Web3(new Web3.providers.HttpProvider("http://127.0.0.1:8545")); Step 4.2 - Set the output of EtherVote.sol to a variable smartContract = fs.readFileSync('EtherVote.sol').toString(); Step 4.3 - Compile the contract using solc solc = require('solc'); compiledCode = solc.compile(smartContract); The output will return a JSON object that contains important information like the Ethereum Contract Application Binary Interface (ABI) and smart contract bytecode. For example: { contracts: { ':EtherVote': { assembly: [ Object ], bytecode: '6060604052341561000f57600080fd5b6040516103dc3803806103dc833981016040528080518201919050505b806001908051906020019061004292919061004a565b505b506100c2565b82805482825590600052602060002090810192821561008c579160200282015b8281111561008b57825182906000191690559160200191906001019061006a565b5b509050610099919061009d565b5090565b6100bf91905b808211156100bb57600081600............continued............', functionHashes: [ Object ], gasEstimates: [ Object ], interface: '[{"constant":true,"inputs":[{"name":"","type":"bytes32"}],"name":"numberOfVotesReceived","outputs":[{"name":"","type":"uint8"}],"payable":false,"stateMutability":"view","type":"function"},............continued............]', metadata: '{"compiler":{"version":"0.4.16+commit.d7661dd9"},"language":"Solidity","output":{"abi":[{"constant":true,"inputs":[{"name":"","type":"bytes32"}],"name":"numberOfVotesReceived","outputs":[{"name":"","type":"uint8"}],............continued............}]}', opcodes: 'PUSH1 0x60 PUSH1 0x40 MSTORE CALLVALUE ISZERO PUSH2 0xF JUMPI PUSH1 0x0 DUP1 REVERT JUMPDEST PUSH1 0x40 MLOAD PUSH2 0x3DC CODESIZE SUB DUP1 PUSH2 0x3DC DUP4 CODECOPY DUP2 ADD PUSH1 0x40 MSTORE DUP1 DUP1 MLOAD DUP3 ADD SWAP2 SWAP1 POP POP JUMPDEST DUP1 PUSH1 0x1 SWAP1 DUP1 MLOAD SWAP1 PUSH1 0x20 ADD SWAP1 PUSH2 0x42 SWAP3 SWAP2 SWAP1 PUSH2 0x4A JUMP JUMPDEST POP JUMPDEST POP PUSH2 0xC2 JUMP JUMPDEST DUP3 DUP1 SLOAD DUP3 DUP3 SSTORE SWAP1 PUSH1 0x0 MSTORE PUSH1 0x20 ............continued............ 
', runtimeBytecode: '60606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680630d8de22c1461006a5780633898ac29146100ab5780638c1d9f30146100ec57806392d7df4a1461012b578063dcebb25e1461016a575b600080fd5b34156100............continued............', srcmap: '2', srcmapRuntime: '', sourceList: [ '' ], sources: { '': { AST: [ Object ] } } } } Step 5.0 - Create an ABI definition object by passing in the ABI definition as JSON from the compiledCode object that was created in Step 4.3. Then, pass this ABI definition object to the web3.eth.contract function in order to create an EtherVote object. abiDefinition = JSON.parse(compiledCode.contracts[':EtherVote'].interface); EtherVoteContract = web3.eth.contract(abiDefinition); Step 5.1 - Save the byteCode object from the compiledCode object to a variable, as we will use this when calling our EtherVoteContract's prototypical .new() function byteCode = compiledCode.contracts[':EtherVote'].bytecode; Step 5.2 - Deploy the smart contract to the Ethereum blockchain by invoking EtherVoteContract.new(...), which takes in two parameters: The first parameter is the values for the constructor - in this case, our list of candidates to vote for The second parameter is an object that contains the following properties: Property Description data The compiled bytecode that will be deployed to the Ethereum blockchain from The address that will deploy the smart contract gas The amount of money that will be offered to miners in order to include the code on the blockchain deployedContract = EtherVoteContract.new(['Kurisu', 'Holo', 'Rin', 'Haruhi', 'Mitsuha'], { data: byteCode, from: web3.eth.accounts[0], gas: 4700000 } ); Step 5.3 - Create an instance of the smart contract by invoking the at function on the EtherVoteContract object, passing in the address property from the deployedContract object that was created in Step 5.2 contractInstance = EtherVoteContract.at(deployedContract.address); Congratulations, you are
now ready to interact with the dapp! (See: Usage above)
    2 points
  20. This is a program that I wrote a few years ago in order to test a theory that I read online. I read on some website that you could calculate the value of pi by throwing hot dogs on the floor, which absolutely blew me away. I couldn't believe it, so I decided to test it. I wrote a program to simulate throwing 1 billion hot dogs on the floor and by golly let me tell you, they're right. Here's how: (Technically this works with any stick-like object.) Let x be the length of our object (hot dog in our case). You must then draw lines on the floor perpendicular to the direction you're facing which are all x length apart. This elegantly drawn image demonstrates what I mean flawlessly: The total number of hot dogs thrown divided by the number which landed on a line is, in this setup, an approximation of pi. Like I said, I simply refused to believe that something so simple could be possible so I wrote a program to simulate the process:

#!/usr/bin/perl -w
use strict;

my($dist, $lower, $upper, $lenComponent, $approx);
my $len = 6;
my $throws = 1000000; #CHANGE TO WHAT YOU WANT
my $intersects = 0;

for(1..$throws){
    $dist = rand(180); #arbitrary maximum throwing distance
    $lenComponent = sin(rand(6.28318530718))*$len; #trig with up to 2pi radians rotation
    $lower = $dist - ($lenComponent/2);
    $upper = $lower + $lenComponent;
    for(my $line = 0; $line<=($dist+$len); $line+=$len){
        if($line>=$lower and $line<=$upper){
            ++$intersects;
            last;
        }
    }
}

$approx = (1/$intersects)*$throws;
print "Pi is approximately: $approx";

And I ran the program overnight with 1 BILLION hot dogs, which yielded this result: 3.14154932843791 VS 3.14159265358979 Error: 0.00004332515 Wowza! I also wrote a second version of the program which uses multi-threading to throw the hot dogs faster. It was actually a neat exercise because I wrote it such that all of the threads can edit the same variable which counts the total number of intersections.
Code:

#!/usr/bin/perl -w
use strict;
use threads;
use threads::shared;

my $intersects :shared = 0;
my $throws = 10000000;
my @threads = ();

sub hotdog{
    my($dist, $lenComponent, $lower, $upper);
    my $len = 1;
    for(1..$throws){
        $dist = rand(5); #arbitrary maximum throwing distance
        $lenComponent = sin(rand(6.28318530718))*$len; #trig with up to 2pi radians rotation
        $lower = $dist - ($lenComponent/2);
        $upper = $dist + ($lenComponent/2);
        for(my $line = 0; $line<=($dist+$len); $line+=$len){
            if($line>=$lower and $line<=$upper){
                lock($intersects);
                ++$intersects;
                last;
            }
        }
    }
}

for(1..10){
    push (@threads, threads->create(\&hotdog));
}
$_->join foreach @threads;

print "Pi is approximately: ".(($throws*scalar(@threads))/$intersects);
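For comparison, here is the textbook form of this experiment (Buffon's needle) as a Python sketch, assuming needle length equal to line spacing. In that setup a needle crosses a line with probability 2/pi, so pi is approximated by 2 * throws / crossings; the bookkeeping is slightly different from the Perl above, but the idea is the same.

```python
import math
import random

# Buffon's needle with needle length == line spacing == 1.
# Lines sit at integer positions; a dropped needle crosses one
# with probability 2/pi, so pi is estimated as 2 * throws / crossings.
def estimate_pi(throws, seed=0):
    rng = random.Random(seed)
    crossings = 0
    for _ in range(throws):
        center = rng.random()              # needle center within one strip
        angle = rng.random() * math.pi     # needle orientation
        half_span = 0.5 * math.sin(angle)  # half the extent across the lines
        # The needle crosses a line if its half-span reaches the nearest line.
        if min(center, 1.0 - center) <= half_span:
            crossings += 1
    return 2.0 * throws / crossings
```

A couple hundred thousand throws is typically enough to land within a few hundredths of pi; the error shrinks roughly with the square root of the number of throws, which is why a billion hot dogs gets you four decimal places.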
    2 points
  21. IMO this is only a mind map of passive reconnaissance resources, and that's only half of the first step of Lockheed Martin's cyber kill chain. Not to diminish the usefulness of the link at all. As far as passive reconnaissance goes, the resources mentioned look quite comprehensive. You could, for example, prepare an initial dossier report to hand off to an active recon team so they could map the company profile to a network topology. This step is indispensable for a large APT, but to say that this covers all the steps is an exaggeration. Because it throws in links to malware analysis resources and exploit archives, one could be misled into thinking it covers those areas well, but they're by no means its strong point. Again, I don't intend to diminish the usefulness of this link at all! Passive information gathering is the most important step of a large-scale APT, yet it's the most glossed-over subject in every security course! If you check out Sparc FLOW's "How to Hack Like a God" and some of his other books, he actually gives some emphasis to casing your target. AND NO WONDER! His books are actually just case studies!
    2 points
  22. Download all of the released NSA documents (continuously updating) with two scripts. Very hacky, but gets the job done. DEPENDS ON LYNX. (Why? Because I'm lazy) $ apt install lynx

nsadl.sh

#!/bin/bash
echo 'Scraping links from Primary Sources...'
lynx -dump "https://www.eff.org/nsa-spying/nsadocs" | grep "https://www.eff.org/document" | awk '/http/{print $2}' > links.txt
echo 'Done. Links saved as "links.txt"'
echo 'Downloading .pdf documents using "links.txt" -- this may take awhile...'
while read line
do
    name=$line
    sh scraper.sh $name
done < links.txt
echo 'All done!'

scraper.sh

#!/bin/bash
STR="`wget --quiet -O - $1 | grep -Eo 'https://www.eff.org/files/[0-9]+/[^"]+\.pdf';`"
wget --no-clobber --quiet $STR

Usage: $ sh nsadl.sh; echo 'Have fun!'
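If you'd rather not depend on lynx, the per-document extraction step that scraper.sh performs with grep can be sketched in Python. The URL regex is taken from the shell script above; the function name is my own:

```python
import re

# Same extraction step as scraper.sh: pull the first EFF-hosted .pdf
# URL out of a document page's HTML. Pattern copied from the grep -Eo
# in the shell version.
PDF_RE = re.compile(r'https://www\.eff\.org/files/[0-9]+/[^"]+\.pdf')

def extract_pdf_link(html):
    match = PDF_RE.search(html)
    return match.group(0) if match else None
```

Pair this with any HTTP client to fetch each page from links.txt and download whatever extract_pdf_link returns.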
    2 points
  23. Just a heads up for people curious. I just pushed some commits with a new Makefile for compiling on x86_64 using the GCC toolchain. To make it work, you're going to have to install Arm's GCC toolchain; you can get the Linux binaries from Arm. I then moved them on my machine to `/usr/bin`, but you can change the path to wherever you want them to be.
    1 point
  24. Check out https://www.echosec.net/ They aren't free though. https://www.maltego.com/ (credz to @killab for showing me that one) might also have a feature-set that overlaps with what GeoCreepy offered.
    1 point
  25. I have often found that nonprofits pop up that display election results. Some of the states have already gotten their info off this one, but it may be worth a try: https://voteref.com/
    1 point
  26. A lot of people use "people" search engines and Google dorks to find people or information about people, but you can actually find out quite a bit of information via public registries. Consider some of these: The Knot - Wedding Registry Search RegistryFinder - Baby Shower and Graduation Search MyRegistry - Wedding, Baby, and Gift List Search Amazon - Registries for Any Occasion Search Bed, Bath, and Beyond - Gift Registry Search The Bump - Baby Registry Search You can also find out the PII of anyone in the United States who is registered to vote by looking at local election registries. Does anyone know other registries to include?
    1 point
  27. Intro I am going to be writing this multi-part series on authoring "professional" blu-rays. I am leaving the word professional in quotes because nothing can truly replace some of the closed industry tools that are used and some of the skill involved. With that said, we can get close, and in some cases produce higher quality blu-rays than what you'd purchase at a store. Let's face it. We live in a digital and streaming age - between services that put entire libraries at your fingertips for a monthly fee and the ability to host your own media servers, physical media is becoming more and more antiquated. I'm one of those people who has terabytes upon terabytes of data with my own media server. Little can come close to the automation you can achieve with tools like Sonarr, Radarr, Jackett, etc. So what, then, is the motivation for blu-ray authoring? Or even acquiring blu-rays in the first place? There are still numerous reasons for physical media, and better yet, authoring your own physical media: Collections Gifts Historical Archives (streaming services revise or remove older media all the time) I am a collector. I really enjoy collecting physical copies of media that I really enjoy. In addition to this, it's also great to author blu-rays and give them out as gifts to people. There might be some ethical or legal questions if you pirate content and give it out to people, so use your best judgment. There are also interesting ethical questions relating to making a copy of something that you own, and the legitimacy of sharing copies with people. Again, these types of topics are beyond the scope of this thread, so just use your best judgment. As a collector, I have become immensely frustrated with publishers who sell blu-rays that are, for lack of a better term, incomplete.
Here are some issues I have run across when paying anywhere from $30 to $150 for collectible blu-rays: The artwork in the case is poor The paper and ink used to house the artwork is low quality The case itself is cheap The disc art is lacking The main menu is some low-quality static background image The subtitles have many typos The subtitles font types are horrific The encoding is objectively bad Why? Why must you do this to me? I am trying to pay you money, and you send me crap. Well, no more. In this series, we are going to cover the following topics which will allow you to author "professional" blu-rays: Blu-Ray Specifications and Media Info Ripping DRM Removal and Region Lock Scrubbing Cleaving Subtitle Authoring Audio Encoding Video Encoding Transport Muxing and Remuxing Main Menu Authoring Burning BD Structure and Metadata Editing Cover Design and Printing for Cases Disc Design and Printing for Discs Examples The following are some examples of some blu-rays that I have authored. This is about the level of quality you can expect to produce by the time you finish this series. Cases and Artwork: Blu-ray Menus: Requirements Operating System: Windows Software: mediainfo (media info), AnyDVD Ripper (ripping, DRM, region lock), MKVToolNix / mkvextract (cleaving), Aegisub (subtitle authoring), eac3to (audio encoding), ffmpeg / handbrake (video encoding), tsmuxer (transport (re)muxing), Nero Video / multiAVCHD (main menu authoring), imgburn (burning), bdedit (BD structure + metadata editing), Photoshop / Nero CoverDesigner (cover design and printing), Photoshop / Epson Print CD (disc design and printing). Some of these tools can be used on Mac and Linux, but unfortunately, not all of them. I have found all of the above tools essential to my authoring workflow, and unfortunately, most are limited to Windows. If anyone knows of any Linux alternatives that would allow us to achieve similar output, I would love to know. 
What's with the weeb stuff Listen, I won't convert you to my degen ways. However, this series will author an anime titled Shouwa Genroku Rakugo Shinjuu that I want in my collection. This will serve as a perfect example to cover some of the topics that we wouldn't need to necessarily cover with traditional Western media. You'll be able to take what you learn from this series, and author your own non-degen content to your heart's content. I am wanting to author some more media for my collection, so hopefully the remaining tutorials won't take me too long. Once I write these out, I will also make video tutorials. Stay tuned!
    1 point
  28. When I was in high school I got really into flash animation and I used to make animated avatars and videos for an old site. Back then I used Adobe Flash CS3 Professional, but it was a pirated copy and the program itself was extremely expensive (as is the modern version). Luckily, there's an extremely similar program called Macromedia Flash 8 that's fully free! (Only compatible with Windows unfortunately) Now I hear you asking "Freak, isn't Flash a dead technology that isn't used anymore?" Well it isn't supported on modern browsers, but that doesn't mean it's dead. You can still watch flash animations and play flash games in a flash player or something like VLC, but you can also just export any animations you make as an MP4 instead of the traditional SWF. https://macromedia-flash-8.soft32.com/
    1 point
  29. This is a pro-tip/PSA for my fellow keyboard enthusiasts: if you're not using double-shot PBT keycaps, you are not living life correctly! I have what I thought was the perfect keyboard, namely the G.Skill RIPJAWS KM780R RGB. With Cherry MX Red switches, 6 macro keys and a profile that wouldn't look out-of-place on a Klingon battleship, it's a most suitable companion during long coding sessions. Before I heard of PBT keycaps, though, I never thought the cheap ABS plastic caps that came with the KM would be a problem. After looking into PBT caps, I started to notice the very real problems. The shine that develops from the accumulation of oils from fingertips was one thing. The bigger issue was the frequent slippage caused by the super smooth, non-textured surfaces. I don't need to be a speed typist most of the time, but I can imagine how constant slippage would be a major obstacle for competitive gamers. After some contemplation, I decided to shell out about $100 for a Razer Huntsman TE for its compact design, TKL layout, brand name and, most importantly, the higher-quality, double-shot PBT keycaps. I was almost ready to buy before the reservations hit me. For one thing, I'd really miss my macro keys, as I do use them regularly; for another, the multiple reviews for the Huntsman TE suggest its extremely light actuations would make it unsuitable for regular everyday typing. It's a gaming keyboard through and through. At this point, I was wondering if it were possible to keep my KM keyboard but just swap out the keys. G.Skill does have replacement keys, but they're also made of the cheap ABS plastic. Looking around, I found these Ducky caps on eBay. (Note: the Ducky spacebar is extra long and may not fit your board, so you may have to be stuck with your old spacebar.) After about two weeks of using them, I can honestly say this was one of the best shopping decisions I ever made. 
The textured caps made my slippage problem all but disappear, and I can actually enjoy typing again. On top of that, I only spent about 1/3 of what I'd need to spend for another whole keyboard when my current one is still working perfectly fine in every other way. In summary, if you're in the market for a new keyboard, make sure they come with PBT keycaps. If you're not but currently using ABS caps, get yourself some PBTs. Your fingers will thank you.
    1 point
  30. From the site: Offensive Security Proving Grounds (PG) are a modern network for practicing penetration testing skills on exploitable, real-world vectors. With the new additions of Play and Practice, we now have four options to fit your needs. Which PG edition is right for you?
    1 point
  31. Examples of IT security frameworks COBIT Control Objectives for Information and Related Technology (COBIT) is a framework developed in the mid-90s by ISACA, an independent organization of IT governance professionals. ISACA currently offers the well-known Certified Information Systems Auditor (CISA) and Certified Information Security Manager (CISM) certifications. This framework started out primarily focused on reducing technical risks in organizations, but has evolved recently with COBIT 5 to also include alignment of IT with business-strategic goals. It is the most commonly used framework to achieve compliance with Sarbanes-Oxley rules. ISO 27000 series The ISO 27000 series was developed by the International Organization for Standardization (ISO). It provides a very broad information security framework that can be applied to all types and sizes of organizations. It can be thought of as the information security equivalent of ISO 9000 quality standards for manufacturing, and even includes a similar certification process. It is broken up into different substandards based on the content. For example, ISO 27000 consists of an overview and vocabulary, while ISO 27001 defines the requirements for the program. ISO 27002, which evolved from the British standard BS 7799, defines the operational steps necessary in an information security program. Many more standards and best practices are documented in the ISO 27000 series. ISO 27799, for example, defines information security in healthcare, which could be useful for those companies requiring HIPAA compliance. New ISO 27000 standards are in the works to offer specific advice on cloud computing, storage security and digital evidence collection. ISO 27000 is broad and can be used for any industry, but the certification lends itself to cloud providers looking to demonstrate an active security program. NIST Special Publication 800-53 The U.S.
National Institute of Standards and Technology (NIST) has been building an extensive collection of information security standards and best practices documentation. The NIST Special Publication 800 series was first published in 1990 and has grown to provide advice on just about every aspect of information security. Although not specifically an information security framework, other frameworks have evolved from the NIST SP 800-53 model. U.S. government agencies utilize NIST SP 800-53 to comply with the Federal Information Processing Standards' (FIPS) 200 requirements. Even though it is specific to government agencies, the NIST framework could be applied in any other industry and should not be overlooked by companies looking to build an information security program. NIST Special Publication 800-171 NIST SP 800-171 has gained in popularity in recent years due to the requirements set by the U.S. Department of Defense that mandated contractor compliance with the security framework by December 2017. Cyberattacks are occurring throughout the supply chain, and government contractors will find their systems and intellectual property a frequent target used to gain access into federal information systems. For the first time, manufacturers and their subcontractors now have to implement an IT security framework in order to bid on new business opportunities. NIST SP 800-171 was a good choice for this requirement as the framework applies to smaller organizations as well. It is focused on the protection of Controlled Unclassified Information (CUI) resident in nonfederal systems and organizations, which aligns well with manufacturing or other industries not dealing with information systems or bound by other types of compliance. It may not be a good fit by itself for industries dealing with more sensitive information such as credit cards or Social Security data, but it is freely available and allows for the organization to self-certify using readily available documentation from NIST. 
The controls included in the NIST SP 800-171 framework are directly related to NIST SP 800-53, but they are less detailed and more generalized. It is still possible to build a crosswalk between the two standards if an organization has to show compliance with NIST SP 800-53 using NIST SP 800-171 as the base. This allows a level of flexibility for smaller organizations that may grow over time, as they will need to show compliance with the additional controls included in NIST SP 800-53.

NIST Cybersecurity Framework for Improving Critical Infrastructure Cybersecurity

The NIST Cybersecurity Framework for Improving Critical Infrastructure Cybersecurity is yet another framework option from NIST. It was developed under Executive Order (EO) 13636, "Improving Critical Infrastructure Cybersecurity," released in February 2013. This standard is different in that it was specifically developed to address U.S. critical infrastructure, including energy production, water supplies, food supplies, communications, healthcare delivery and transportation. These industries have all found themselves targeted by nation-state actors due to their strategic importance to the U.S. and must maintain a higher level of preparedness. The NIST Cybersecurity Framework differs from the other NIST frameworks in that it focuses on risk analysis and risk management. The security controls included in this framework are based on the defined phases of risk management: identify, protect, detect, respond and recover. These phases include the involvement of management, which is key to the success of any information security program. This structured process allows the NIST Cybersecurity Framework to be useful to a wider set of organizations with varying types of security requirements.

CIS Controls (formerly the SANS Top 20)

The CIS Controls sit at the opposite end of the spectrum from the NIST Cybersecurity Framework.
This framework is a long list of technical controls and best-practice configurations that can be applied to any environment. It does not address risk analysis or risk management like the NIST Cybersecurity Framework; it is solely focused on hardening technical infrastructure to reduce risk and increase resiliency. The CIS Controls are a welcome addition to the growing list of security frameworks because they provide direct operational advice. Information security frameworks can sometimes get caught up on the risk analysis treadmill without reducing overall organizational risk. The CIS Controls pair well with these existing risk management frameworks to help remediate identified risks. They are also a highly useful resource for IT departments that lack technical information security experience.

HITRUST CSF

It is well known that the HITECH/HIPAA Security Rule has not been successful in preventing data breaches in healthcare. The original HIPAA compliance requirements were written in 1996 and were set to apply to a broad set of technologies and organizations. More than 230 million people in the U.S. have had their data breached by a healthcare organization, according to the Department of Health and Human Services. The overly general requirements included in HIPAA and its lack of operational direction are partly to blame for this situation. HITRUST CSF is attempting to pick up where HIPAA left off and improve security for healthcare providers and technology vendors. It combines requirements from almost every compliance regulation in existence, including the EU's GDPR. It includes both risk analysis and risk management frameworks, along with operational requirements, to create a massive homogenous framework that could apply to almost any organization, not just those in healthcare. HITRUST is a massive undertaking for any organization due to the heavy weighting given to documentation and processes.
Many organizations end up scoping smaller areas of focus for HITRUST compliance as a result. The cost of obtaining and maintaining HITRUST certification adds to the level of effort required to adopt this framework as well. However, the fact that the certification is audited by a third party adds a level of validity similar to an ISO 27000 certification. Organizations that require this level of validation may be interested in the HITRUST CSF. The beauty of any of these frameworks is that there is overlap between them, so "crosswalks" can be built to show compliance with different regulatory standards. For example, ISO 27002 defines information security policy in section 5; COBIT defines it in the section "Plan and Organize"; Sarbanes-Oxley defines it as "Internal Environment"; HIPAA defines it as "Assigned Security Responsibility"; and PCI DSS defines it as "Maintain an Information Security Policy." By using a common framework like ISO 27000, a company can then use this crosswalk process to show compliance with multiple regulations such as HIPAA, Sarbanes-Oxley, PCI DSS and GLBA, to name a few.

IT security framework advice

The choice of a particular IT security framework can be driven by multiple factors. The type of industry or compliance requirements can be deciding factors. Publicly traded companies will probably want to stick with COBIT in order to more readily comply with Sarbanes-Oxley. The ISO 27000 series is the magnum opus of information security frameworks, with applicability in any industry, although the implementation process is long and involved. It is best used where a company needs to market its information security capabilities through ISO 27000 certification. NIST SP 800-53 is the standard required of U.S. federal agencies, but it could also be used by any company to build a technology-specific information security plan.
The HITRUST CSF is a good fit for healthcare software or hardware vendors looking to provide validation of the security of their products. Any of these frameworks will help a security professional organize and manage an information security program. The only bad choice among them is not choosing any of them. Source
    1 point
  32. @cwade12c Your 4-screen setup puts mine to shame. Are they all connected to the same graphics card? The specs for my rig: The prices were accurate as of 2019.
    1 point
  33. @cwade12c It's hard to say there is one favorite; they are all favorites in their own way. The MBP 2015: I've got to give it to Apple on this one, the best touchpad experience. The iBook G4, while it ain't the fastest, something about those old keyboards; it by far has my favorite keyboard. The netbook is great for light workloads or just work involving SSHing into a server while on the couch. But I do often spend most of the time on the MBP 2015 because that is where I do all of my school work.
    1 point
  34. Shodan is crazy powerful. My advice in using it would be: always think about it before engaging in your next action.
    1 point
  35. https://www.newsweek.com/25-free-websites-learn-new-skills-youtube-1617923 Some of these will be known to you, others might not. For paid learning, I've been on Udemy for two years now and have purchased over 20 courses. They have sales several times throughout the year that allow you to buy a $200 course for like $15. You have lifetime access to the material.
    1 point
  36. What are some dorks and APIs that you find useful for username/profile gathering? Post them all here! Here's a couple to get started. Amazon Usernames: https://www.google.com/search?q=site:amazon.com+%3Cusername%3E Github Usernames: https://api.github.com/users/%3Cusername%3E/events/public
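Since both of those are just URL templates, here is a small Python sketch that builds them for an arbitrary username. The endpoint paths are taken from the post itself; the helper names are my own.

```python
from urllib.parse import quote

def amazon_dork(username):
    # Google dork from the post: site:amazon.com <username>
    return "https://www.google.com/search?q=" + quote(f"site:amazon.com {username}")

def github_events_url(username):
    # GitHub API endpoint from the post; serves recent public activity as JSON
    return f"https://api.github.com/users/{quote(username)}/events/public"

print(github_events_url("octocat"))
print(amazon_dork("octocat"))
```

Fetching the GitHub URL with any HTTP client returns the user's recent public events, or a 404 if the account does not exist, which also makes it handy for quick username-existence checks.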
    1 point
  37. College today is largely a scam (speaking as someone with a Bachelor's, a Master's, and serious thoughts of starting a PhD). Unless you're going into STEM or a field that by law requires a degree, I actually don't see the point vis-à-vis the debt incurred. The democratization of access to online education has changed the game. I also subscribed to Brilliant Premium for a year. It's novel and good, but not worth a resub imo.
    1 point
  38. So a few months ago, I heard a podcast where a person was talking about how helpful it is to do something as simple as implement an HTTP server in C. So I decided to embark on this quest when school got a little quieter for the summer (I am just taking a light load). I ended up doing this for a few reasons. First, as a learning experience: how better to get to know HTTP and how websites work than by implementing a web server? Next was dogfooding my own code, making something I can use: I get the experience of writing it, but also the experience of using my own code, and of course of implementing my own features natively in C. Lastly, I figured it would make an interesting resume project. Why C? Well, it's a little closer to the metal and requires the user to get more intimate with the inner workings. Here is the code on my github. Note, as of this posting, I still have some cleaning up to do, but it passes all memory checks on valgrind and seems to be running fine. It is single threaded and does not yet use non-blocking IO. https://github.com/martintc/HttpServer Website I currently have it deployed to for testing: http://martintc.tech

Things I plan to do and improve on (and lessons learned):
Implementing my own garbage collector to simplify the memory model.
Handling PUT request methods.
Implement SSL/TLS
Implement multi-threaded and non-blocking IO
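For anyone curious what the core of such a server looks like, here is a minimal single-threaded sketch in Python rather than C. The names and response handling are my own, not taken from the linked repo:

```python
import socket

def build_response(request: bytes) -> bytes:
    """Answer GET / with a tiny page; anything else gets 404."""
    try:
        method, path, _version = request.split(b"\r\n", 1)[0].split(b" ")
    except ValueError:
        # request line didn't have exactly three fields
        return b"HTTP/1.1 400 Bad Request\r\n\r\n"
    if method == b"GET" and path == b"/":
        body = b"<h1>hello</h1>"
        return (b"HTTP/1.1 200 OK\r\n"
                b"Content-Type: text/html\r\n"
                b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n" + body)
    return b"HTTP/1.1 404 Not Found\r\n\r\n"

def serve(port: int = 8080) -> None:
    # Single-threaded, blocking accept loop, mirroring the post's description.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            with conn:
                conn.sendall(build_response(conn.recv(65536)))
```

Splitting parsing (build_response) from socket handling (serve) keeps the interesting part testable without opening a port; the C version faces the same design decision.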
    1 point
  39. @cwade12c I also have been a long-time user. Actually, you guys got me into Arch and such when I was in high school all those years ago. But yeah, recently I have gotten on the BSD train. What do I like about it? It adheres more strictly to the UNIX philosophy and is a direct descendant of UNIX, unlike Linux, which was a clean-room re-implementation influenced by Minix. The system designs are simpler. All of the BSDs have great documentation compared to 99% of Linux distributions. The BSDs are each their own independent OS: binary compatibility is not the same across them, and each has taken its own route (for instance, DragonFly BSD is transitioning to a microkernel). The major selling point for me is that they each own their whole stack, from bootloader to kernel to userland, which makes for a really solid, cohesive system. In contrast, the Linux kernel and the GNU userland tools are independent projects that just happen to benefit from working together. The communities are also close-knit compared to Linux, and less toxic. Within a couple of weeks of running NetBSD, I was working with the NetBSD audio dev to test a patch of theirs for a machine of mine. I do believe their developers (BSD in general) are much more involved in their communities. The ports systems are great. My favorite by far is pkgsrc (a NetBSD project) since it can run across multiple operating systems. I actually use pkgsrc and pkgin on macOS Big Sur as a replacement for brew and MacPorts. The not-so-great part is hardware support. NetBSD is nice because all drivers ship with the GENERIC image, and NetBSD does not take a performance hit when unnecessary drivers are loaded into the kernel (whereas FreeBSD and Linux do, according to a few NetBSD devs I talked to), so NetBSD is the easiest, hardware-wise, for finding out if your system is supported. The main issues would be wifi and GPUs.
FreeBSD for sure does not have any 802.11ac support (Adrian Chadd is working on it slowly); NetBSD and OpenBSD have a few devices that are supported for 802.11ac. Otherwise, have a supported b/g/n card, or have a supported wifi dongle ready. The patch I tested for the NetBSD audio dev was for my MacBook Pro. I've got a 2015 that is dual-booted with Big Sur (mostly for school stuff) and NetBSD. I don't fully understand the audio workings, but NetBSD was defaulting to channel 4 when it should have defaulted to channel 2, so the dev made a patch to check that channel 2 is the default and, if not, to make it so. My experience with OpenBSD has been on an old iBook G4 that I acquired last year. OpenBSD has great legacy PowerPC support. It is a great system, and Theo has done a great job with it since he forked it from NetBSD in the mid '90s. CWM is interesting and praised by users; if you're not familiar, CWM is a window manager made by the OpenBSD community. And of course, OpenBSD has a legacy of great contributions like creating OpenSSH, doas, LibreSSL, etc. I have also used it as a webserver. Their in-house HTTP server is nice, and it integrates well with acme-client for automated Let's Encrypt SSL certs.
    1 point
  40. So far I've used this protocol for a smart fridge and a bilirubin measurement device (for testing jaundice). Because popular Arduino Bluetooth interfaces happen to act as media-independent serial lines, this will also work with Bluetooth. http://www.electronica60norte.com/mwfls/pdf/newBluetooth.pdf

Responses are returned in the 'value' or 'error' parameters during reads. The 'value' parameter is used as the source during writes.

import serial
import struct
import time

params = ('option', 'pin', 'value', 'error')

ser = serial.Serial('COM3')  # opening a serial connection to arduino over USB will reset the device
ser.read(1)  # wait for ready signal so we know that arduino came back online.
ser.write(struct.pack('<hhhh', 0, 0, 0, 0))
response = struct.unpack('<hhhh', ser.read(8))
print(dict(zip(params, response)))

/*
 * ComPacket Data Structure
 * >option: 0 = Analog Read, 1 = Digital Read, 2 = Analog Write, 3 = Digital Write
 * >pin: pin to read/write from
 * >value: Arduino will return pin value to this variable. Leave empty on request
 * >error: Arduino will return an error code if there's a problem with the request or reading a pin value
 */
enum ComPacket_option {A_READ, D_READ, A_WRITE, D_WRITE};
enum ComPacket_error {INVALID_OPTION, INVALID_READ_PIN, INVALID_WRITE_PIN};

struct ComPacket {
    enum ComPacket_option option;
    int pin;
    int value;
    enum ComPacket_error error;
} buf;

void ComPacket_zero(ComPacket *packet) {
    packet->option = (enum ComPacket_option) 0;
    packet->pin = 0;
    packet->value = 0;
    packet->error = (enum ComPacket_error) 0;
}

void ComPacket_handleRead(ComPacket *packet) {
    if (packet->option == A_READ) { packet->value = analogRead(packet->pin); return; }
    if (packet->option == D_READ) { packet->value = digitalRead(packet->pin); return; }
}

void ComPacket_handleWrite(ComPacket *packet) {
    if (packet->option == A_WRITE) { analogWrite(packet->pin, packet->value); return; }
    if (packet->option == D_WRITE) { digitalWrite(packet->pin, packet->value); return; }
}

void ComPacket_handleError(ComPacket *packet, enum ComPacket_error error) {
    ComPacket_zero(packet);
    packet->error = error;
}

void ComPacket_processPacket(ComPacket *packet) {
    switch (packet->option) {
        case A_READ:
        case D_READ:
            ComPacket_handleRead(packet);
            break;
        case A_WRITE:
        case D_WRITE:
            ComPacket_handleWrite(packet);
            break;
        default:
            ComPacket_handleError(packet, INVALID_OPTION);
    }
}
/*
 * ******************* END COMPACKET
 */

// Use Serial1 for TX/RX pins
/*
 * https://www.arduino.cc/reference/tr/language/functions/communication/serial/
 * Serial, Serial1, Serial2 and Serial3 are available. See reference for which you need
 */
auto SerialDevice = Serial;

void setup() {
    // put your setup code here, to run once:
    ComPacket_zero(&buf);
    SerialDevice.begin(9600, SERIAL_8N1);
    // wait for serial port to connect. Needed for native USB
    while (!SerialDevice);
    SerialDevice.write(1);
}

void loop() {
    // check if data available
    if (SerialDevice.available() > 0) {
        // if so process packet and send back result
        SerialDevice.readBytes((char*)&buf, sizeof(ComPacket));
        ComPacket_processPacket(&buf);
        SerialDevice.write((char*)&buf, sizeof(ComPacket));
        SerialDevice.flush();
    }
}
    1 point
  41. For Christmas I'm making my dad a RetroPie Raspberry Pi so he can play all of his old DOS games. I also found an archive online that has 1405 of them for free: http://www.abandonia.com/en/game/all. As far as I can tell, however, there is no option to download all of them at once. You have to sift through 141 different pages and click a series of links to download each of them individually. For that reason I wrote this program to do the dirty work and download them all.

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;

my($ua, $response, $mainContents, $thisLink, $check, $page, $thisContents, $downloadLink, $name);
my @links = ();

$ua = LWP::UserAgent->new(
    protocols_allowed => ['http', 'https'],
    timeout => 10,
    agent => "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0", #Necessary otherwise 403 forbidden.
);

$page = 0;
while(1){
    $response = $ua->get('http://www.abandonia.com/en/game/all?page='.int($page));
    if($response->is_success){
        $mainContents = $response->decoded_content;
        while($mainContents =~ /<a href=\"\/en\/games\/(.*?)\.html\"/g){
            $thisLink = $1;
            if(scalar(@links) == 0){
                push(@links, $thisLink);
            }else{
                $check = 1;
                for(my $i = 0; $i<scalar(@links); ++$i){
                    if($thisLink eq $links[$i]){
                        $check = 0;
                        last;
                    }
                }
                if($check){
                    push(@links, $thisLink);
                }
            }
        }
        if($mainContents =~ /current\">(.*?)<\/strong/g){
            if($page - $1 == 0){
                last;
            }
        }
        $page++;
    }else{
        die $response->status_line;
    }
}

for(my $i = 0; $i<scalar(@links); ++$i){
    $downloadLink = &getDownloadLink($links[$i]);
    if($downloadLink =~ /\?game=(.*?)\&/){
        $name = $1;
    }
    $ua->get($downloadLink, ':content_file' => $i." - ".$name.'.zip');
}

sub getDownloadLink{
    $response = $ua->get('http://www.abandonia.com/en/games/'.$_[0]);
    if($response->is_success){
        $thisContents = $response->decoded_content;
        if($thisContents =~ /game_downloadpicture\"><a href=\"\/en\/downloadgame\/(.*?)\">/){
            $response = $ua->get('http://www.abandonia.com/en/downloadgame/'.$1);
            if($response->is_success){
                $thisContents = $response->decoded_content;
                if($thisContents =~ /files\.abandonia\.com\/download\.php(.*?)\"/){
                    return "http://files.abandonia.com/download.php".$1;
                }
            }else{
                die $response->status_line;
            }
        }
    }else{
        die $response->status_line;
    }
}

It's worth noting that all of their direct download links change on a timer, so you couldn't actually just make a big static list of links to replace this program. This program retrieves the links as it downloads them, so they'll never be expired. I also made this in a somewhat roundabout way. I had the program go to every page on the "All Games" list and find each of the links, but after I had already written most of the program this way, I learned that all of the download links are in the source code of URLs of the format "http://www.abandonia.com/en/downloadgame/####", so theoretically I could have just made one simple for loop over a range of numbers. But I noticed that some of the games have a number code far larger than 1405, which is the number of games that the site claims to have, so many links of that format might not be valid, and there's no way to know what the upper bound should be. This program automatically clicks through all of the different pages on the list, so even if they add more games in the future, this program would still get all of them (barring any formatting changes that might break my regex). For those reasons I still kind of like my way of doing it. Enjoy.

Edit: Just as an update, now that I've downloaded them all it looks like there are actually only 1,134 games. I'm fairly certain that this is because a few of the games on their list are not free (such as "Alien Rampage"). But 1,134 games is still pretty good imo.
    1 point
  42. Introduction

Before the compiler, there was the assembler. The invention of the assembler revolutionized early computers. Before the assembler, instructions were submitted to a computer physically, by some action such as throwing a switch. The pre-assembler world meant programming a computer in the machine's own native tongue, machine code: a tricky combination of countless 1s and 0s for even the simplest calculations. While most challenges today do not require the use of assembly, it is still vital to learn. Through studying it, I have gained a deeper understanding of what is going on at the lower level. It refines the process of thinking through challenges in a higher-level language and gives concrete form to what an array, a stack, or other data structures look like in the CPU's world.

Tool

The tool used is the one my university uses, called PLPTool. It is an IDE and simulation environment for assembly based on the MIPS instruction set. It does not support the full MIPS instruction set, but the 27 instructions that are most vital in MIPS. It is an educational tool. PLPTool is written in Java, so you will need a JVM/JRE to run it. From the site, they have executables for Linux, Mac, and Windows. Be aware, however, that if you are running a Java version greater than 8, it will not be recognized. If this is the case, just download the jar and run it from the command prompt/terminal:

java -jar <JAR File>

Link to PLPTool site: http://progressive-learning-platform.github.io/home.html

PLPTool comes with a lot of ways to visualize and simulate actions. There is a built-in tool to view the register file, giving a look at the values held within registers. There is an LED tool to see which LEDs are lit up, and a switches tool to interact with the program through switches. These are just a few examples. Below are two images.

Registers

Registers can be thought of as just storage locations.
PLPTool has ten temporary registers that we will use for an introduction to assembly with this tool: $t0-$t9. A register in PLPTool holds at most a 32-bit value, 0xffffffff in hexadecimal (that is 8 'f's). A converter can be used to find the decimal value. Inputs into registers can come in several different forms; for this tutorial I will focus on three. We can define values for registers using binary, decimal, and hexadecimal. If you are unfamiliar with these three number systems, I would recommend reading about them before continuing with this tutorial.

Load Immediate Instruction

The first instruction to learn is the load immediate instruction. This instruction is really a convenience built from two more basic instructions, but I will not cover that here. The load immediate instruction takes as arguments a register and a value. When it is run, the value will be placed in the given register. It takes two clock cycles for this instruction to complete; think of it as the instruction running twice. The program above shows the syntax: the command is li, followed by the register and a value, here the decimal value 1. Accompanying it is another image that shows what the register file looks like after it is run: the value 1 is stored in register $t0.

Special Memory Locations

Special memory locations exist. These special locations in memory are often used to interface with I/O devices. If you are familiar with C, accessing these locations looks like dereferencing a pointer to a fixed address. The addresses are in hexadecimal. For now, we will focus on the LEDs. In order to use the LEDs, we must first set a register to the value of the LEDs' memory address. When we access that register, we will be accessing the memory address stored in it.

Store Word Instruction

In order to make the LEDs output a value, we will need to use the store word instruction. This instruction takes two registers and stores the value of one at the address held in the other.
In the case of the LEDs, it will copy the value from one register into the memory address stored in the other register.

Beginning Control Flow

Control flow is vital to how a computer program operates. We are familiar with it in languages such as C or Java via if/else statements and methods/procedures/subroutines/functions. The first control-flow instruction to learn in MIPS is the jump instruction. Think of it as calling a function that returns void. An important note: execution will not return from the jump target to resume where it left off prior to the jump; think of it as working in one direction. Here is some pseudo code that will hopefully explain it. In the pseudo code, we have a main function and two other functions. When executing this, we set x equal to 10, then jump to the instructions in 'add-1'. Inside of 'add-1', we add 1 to the value stored in x. The next code executed in assembly will be the code inside the function 'sub-1'. The code does not jump back up to main and execute "set y equal to 12".

Labels

Before we can jump, we must learn how to define a "function." We simply name the function and follow it with a colon. This is how main, add-1, and sub-1 were defined in the pseudo code above.

Jump Statement

The jump statement has a simple syntax:

j my_label

Sample code: LED output in order

As you may notice from the code, this program will loop.
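The sample listing itself did not survive the copy, so here is a sketch of what an "LED output in order" loop looks like in PLP-style assembly. The LED address below is my recollection of the PLP memory map and should be verified against the PLPTool documentation; the labels are my own.

```
main:
    li $t0, 0xf0200000    # assumed memory-mapped LED address; check the PLP manual
    li $t1, 0             # value to show on the LEDs
loop:
    sw $t1, 0($t0)        # store word: copy $t1 out to the LEDs
    addiu $t1, $t1, 1     # next value in order
    j loop                # jump back to the label, so the program loops forever
    nop
```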
    1 point
  43. I just read this article which says that malicious backdoors have been found in at least 11 different Ruby libraries that have been downloaded a total of at least 3,584 times. A user was able to re-upload malicious versions of the libraries to RubyGems. Obviously, if you are a Ruby developer, you should make sure that your versions of these libraries are clean. The exploits send sensitive information from the host computer to a compromised computer in Ukraine, according to the article. Much more interesting is this github issue where they discuss the malicious code. It shows that the rest-client library specifically (the most downloaded of the malicious libraries) had an additional line which opened a pastebin file containing additional malicious code.
    1 point
  44. For one of my classes this semester, we had a week where we took a quick look into PROLOG. This language is probably not what most people are used to working with. It follows the logic paradigm, unlike LISP, which is functional, or C, which is imperative. The logic paradigm is an interesting one, with many of its uses being in artificial intelligence. As my instructor said, the point is to get rid of programming altogether. The mechanics of the language are simple. A program is written using three main parts: facts, rules, and queries. The programmer writes a set of facts and rules. When interpreted, the interpreter will construct a deductive database. Against that database, a user submits queries and the computer responds with an answer. Essentially, based off of the deductive database, the computer will "figure out" the answer to the query on its own. Before sharing some sample code, I will provide a few resources. For computers of all platforms, a popular interpreter for PROLOG is swi-prolog. With a quick google search, you will find that swi-prolog also provides a website where it can be used in the browser. If installing the interpreter on one's own computer, a prolog file that constructs the deductive database ends in a ".pl" extension. Also, on my linux system, once swi-prolog is installed, the interpreter is started by typing "prolog" into the terminal. To load a ".pl" file, start the prolog interpreter, then submit a query of "consult(FileNameWithoutExtension)." Statements in prolog are built around predicates and are to be thought about in english. An example of a fact would be: a dog is a pet. Predicates have relationships to objects; in prolog terms, dog is an object. Stating this fact in prolog looks like this:

pet(dog).

When this file is used in the interpreter and we submit a query asking if a dog is a pet, this is what it looks like:

?- consult(ex1).
true.

?- pet(dog).
true.
To exit the interpreter, input the following line:

halt.

Now, rules are based around if-statements. "If" in prolog is represented as ":-". Look at the following example:

pet(dog).
dog(sparky).
owner(tux).
owns(tux, sparky) :- dog(sparky), pet(dog), owner(tux).

We have defined three facts. The last line states a rule. We are saying: tux owns sparky IF sparky is a dog AND a dog is a pet AND tux is an owner. Commas (,) in prolog represent AND. Running this in an interpreter with some queries:

?- consult(ex2).
true.

?- owns(tux, sparky).
true.

?- owns(me, sparky).
false.

If we ask if tux owns sparky, we get true as the output. However, if we ask if me owns sparky, it returns false. Now to make it a little more complex:

father(tom).
mother(lisa).
boy(chandler).
girl(mila).
father_of(X, Y) :- father(X), boy(Y); girl(Y).
mother_of(X, Y) :- mother(X), boy(Y); girl(Y).

In this script, a father, mother, boy, and girl are defined. We are defining rules stating who is the father of whom and who is the mother of whom. We can query this and get responses. A quick note: the semi-colon (;) represents OR. So we are saying, X is the father_of Y if X is a father and Y is a boy, or Y is a girl. Variables in PROLOG start with capital letters.

?- consult(ex3).
true.

?- father_of(tom, lisa).
false.

?- father_of(tom, mila).
true.

?- mother_of(lisa, tom).
false.

Now, we can also make some queries that will return something interesting.

?- father_of(tom, WHO).
WHO = chandler ;
WHO = mila.

?- mother_of(who, mila).
false.

?- mother_of(WHO, mila).
WHO = lisa.

By using WHO as a parameter in a query, it will show who all the children of tom are. This concludes a quick showing of PROLOG. I was eager to post this because it doesn't seem like a whole lot of people are too familiar with the logic paradigm.
    1 point
  45. Nice. I haven't looked at C in a while, so I really enjoyed reading your code and re-familiarizing myself with it. I never went very far with C, so your program actually taught me a few things too. As far as I can tell, your program fails to account for letters that are uppercase, so you might want to change it to:

#include <ctype.h>
//...
if (tolower(ch) == list[i].l) {
    list[i].count++;
    total_chars++;
}

I also believe that the part directly after:

else {
    continue;
}

is unnecessary, because the program would naturally continue the loop regardless. I don't think that this affects run time or anything; it just seemed strange to me. It might also be worth noting that "Total characters" is only the total number of alphabetical characters, but that was almost certainly your intention. calc_percent() also doesn't calculate the percentage, it calculates the ratio, so it's a bit of an odd function name. I did have one question about the program. In the case that the program fails to generate the list, you terminated it using "return -1", and in the case that it fails to open the file you used "exit(1)". Was there a reason for this difference? I've also always really liked trying to make programs as fast as possible, even if just by entirely negligible amounts, so if you're like me then you could also make the following changes. Once a character has matched, your program still checks the remaining letters of the alphabet unnecessarily, so you could add a break command to terminate the loop early, like this:

if (tolower(ch) == list[i].l) {
    list[i].count++;
    total_chars++;
    break;
}

And if you really want to be obnoxious like me, then you could actually change the initialization of your list variable to this:

char charList[] = {'e', 't', 'a', 'o', 'i', 'n', 's', 'h', 'r', 'd', 'l', 'c', 'u', 'm', 'w', 'f', 'g', 'y', 'p', 'b', 'v', 'k', 'j', 'x', 'z', 'q'};

because that's the order of how frequently each character appears in English text on average.
That would, however, make the formatting uglier because it wouldn't display the results in alphabetical order anymore.
    1 point
  46. I found myself playing Wikigolf with some friends the other day. If you don't know, the game involves trying to get from a specified starting Wikipedia page to a specified ending page by only clicking links to other Wikipedia pages, in as few clicks as possible (like golf). For example, if you were tasked to get from "Apple" to "Banana" then your route might be (Apple -> Fruit -> Banana), which would get you a score of 2. An inherent problem with this game is that you have no real way of knowing if your score is any good. Perhaps it took you 7 clicks to get from Jeffrey Dahmer to McDonald's. That may seem like a decent score considering that the two topics are seemingly very divorced from each other, but how can you be sure? What is the lowest possible number of clicks that you could have gotten? Worry not! I have written a program which can tell you just that! (Interestingly enough, you can get from Jeffrey Dahmer to McDonald's in only 2 clicks).

    #!/usr/bin/perl -w
    use strict;
    use LWP::Simple;

    my(@arr1, @arr2, @pages, $start, $end, $contents, $valid, $url);

    do{
        $valid = 1;
        print "Enter the starting page (Eg. \"apple\"): ";
        chomp($start = lc(<stdin>));
        print "Enter the ending page: ";
        chomp($end = lc(<stdin>));
        $contents = get("https://en.wikipedia.org/wiki/".$start);
        if(!(defined $contents)){
            print "\nError: ".$start." is not a valid wikipedia page.\n";
            $valid = 0;
        }
        $contents = get("https://en.wikipedia.org/wiki/".$end);
        if(!(defined $contents)){
            print "\nError: ".$end." is not a valid wikipedia page.\n";
            $valid = 0;
        }
    }while($valid == 0);

    @pages = ($start);
    @arr1 = ($start);

    for(my $hits = 1;; $hits++){
        for(my $i=0; $i<scalar(@arr1); $i++){
            $contents = get("https://en.wikipedia.org/wiki/".$arr1[$i]);
            if(!(defined $contents)){ next; }
            while($contents =~ /href=\"(.*?)\"/g){
                $url = lc($1);
                if(substr($url,0,6) eq "/wiki/"){
                    $url = substr($url,6);
                    if( $url =~ /file\:/ or      #Exclusions
                        $url =~ /special\:/ or
                        $url =~ /talk\:/ or
                        $url =~ /wikipedia\:/ or
                        $url =~ /category\:/ or
                        $url =~ /help\:/ or
                        $url =~ /portal\:/ or
                        $url =~ /template\:/ or
                        $url =~ /main_page/ or
                        grep( /^\Q$url\E$/, @pages)){ #If we've already seen that page.
                        next;
                    }
                    if($url eq $end){
                        print "The minimum number of hits is: ".($hits);
                        exit(0);
                    }
                    push(@pages, $url);
                    push(@arr2, $url);
                }
            }
        }
        @arr1 = @arr2;
        @arr2 = ();
    }

Unfortunately, a limitation of this program is that while it does tell you the minimum number of clicks, it does not tell you what those clicks are exactly. This is because it lumps all pages together based on what level they are on, irrespective of what page led to them. This image should demonstrate what I mean:

Be warned! The program is slow. This is because many Wikipedia pages link to 500+ other Wikipedia pages. Therefore, even for cases where only 2 clicks are necessary, the program may still look at the contents of over 250,000 unique Wikipedia pages.
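The "which clicks were they?" limitation can be fixed by remembering each page's predecessor during the breadth-first search. Here is a minimal sketch in Python over a made-up toy link graph (the graph, page names, and the function name `shortest_route` are mine, standing in for fetching real Wikipedia pages):

```python
from collections import deque

def shortest_route(links, start, end):
    """BFS that records each page's predecessor so the route can be rebuilt."""
    parent = {start: None}   # page -> page we reached it from
    queue = deque([start])
    while queue:
        page = queue.popleft()
        if page == end:
            # Walk the parent chain backwards to recover the click path.
            route = []
            while page is not None:
                route.append(page)
                page = parent[page]
            return route[::-1]
        for nxt in links.get(page, []):
            if nxt not in parent:    # first visit == shortest route to nxt
                parent[nxt] = page
                queue.append(nxt)
    return None  # no route exists

# Toy link graph standing in for "which pages link to which".
links = {
    "apple": ["fruit", "tree"],
    "fruit": ["banana", "apple"],
    "tree":  ["forest"],
}
print(shortest_route(links, "apple", "banana"))  # ['apple', 'fruit', 'banana']
```

The number of clicks is just the route length minus one, and the `parent` dict doubles as the "already seen" set, so it costs no extra memory over the `@pages` list the Perl version keeps anyway.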
    1 point
  47. In preparation for the CTF series I want to work on, I've been watching some videos of other people's processes. I highly encourage everyone to watch this one and appreciate its subtleties. https://youtu.be/_m_LY7JO9MM

For those of you that didn't watch it (go back and watch it), the guy in the video has a vulnerable image from vulnhub.com called "pandora". He runs a portscan on the image and finds that the admin left himself a backdoor on one of the ports. After connecting to the port, it prompts for a password. Mr. Hacker then tries all the common passwords (sex, secret, god, lol), and then wonders to himself: "Hmm... is this vulnerable to a timing attack?" The rest of the video is of him coding a short Python script to exploit the port.

This was the first time I'd ever heard of a timing attack, and I have knowledge of most vulnerabilities. So at first I wondered if he was either (1) a genius, or (2) had prior knowledge and wanted to look cool. Turns out, though, that this problem falls into an entire subclass of exploits of the genre "timing attack", and this one in particular is possible because of comparison functions running in O(n) time.

What is a timing attack? On Wikipedia it certainly sounds complex, and I'm sure in some cases it can be fairly complex. But for our purposes all you need to understand is that:

    - Computers execute instructions to get things done
    - Computers take time to execute those instructions
    - Depending on how long it takes, we can make assumptions

Meat and Potatoes of this simple magic trick

Every password comparison function at some point needs to compare the inputted string to its saved value for the password. Here's some pseudocode for how this works:

    for letter1, letter2 in zip(input_string, correct_stored_password):
        if letter1 != letter2:
            return False
    return True

Notice that on the first letter that doesn't match, it goes ahead and returns False. No point in wasting time, right? This is actually the problem.
Don't be distracted by the for loop. This function does indeed run in O(n) time. One problem: 'n' doesn't equal the length of the string, it equals the number of leading letters in the string that are correct, because otherwise the function immediately returns. And this is how the timing attack works. While checking the password, the computer looks at the first letter and compares it to its stored value. If it's incorrect it will display "invalid password" and return, but if the first letter is correct it must check at least one more letter before returning! Hence the closer your guess, the further the computer looks into your string, and the longer it should take.

Note: in the video the guy actually does the opposite: he immediately starts looking for a letter that is rejected faster, characteristic of an invalid letter. He was not the first person to work on this image, and someone else had already posted their results first (you can no longer find a link). In reality, when looking for timing attacks you take an average of all the compute times and find an outlier. An outlier is more useful than just looking for what takes a longer or shorter amount of time, because you don't know what the backdoor is written in, and you don't know how the code is optimized. What's more, this is the only way to detect if the code is doing hashing behind the scenes. In other words, this dude cheated for views and made you watch 30 minutes of him being shitty at Python before trying to wow you with his magic trick.

Now back to the post. When Mr. Hacker decided that he would use a timing attack, you could say he was using some lateral thinking. Would an administrator lazy enough to write a backdoor instead of setting up SSH actually take the time to hash the password? The answer here is no.
A timing attack such as this one wouldn't have worked if the password was hashed before comparison, because in all the commonly used secure hashing functions, slight changes in input have drastic effects on the output. Consider this image: even changing one letter completely changes the resulting hash. So hashing makes a timing attack impossible.

Conclusion: This attack should work anywhere a linear comparison function is used and is not offset by a confounding function like a hash. This includes not just password prompts but also guessing games, finding cheat codes, etc.
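To make the leak concrete, here's a small Python sketch (the names `naive_compare` and `hashed_compare` are mine, not from the video). Instead of measuring wall-clock time, which is noisy, it counts how many characters the early-exit comparison examines: the count grows with the length of the correct prefix, which is exactly the signal a timing attack measures. Hashing the guess first destroys that correlation:

```python
import hashlib

def naive_compare(guess, secret):
    """Early-exit comparison; returns (match, chars_examined)."""
    examined = 0
    for g, s in zip(guess, secret):
        examined += 1
        if g != s:
            return False, examined
    return len(guess) == len(secret), examined

secret = "hunter2"
# More correct leading characters -> more work before the mismatch.
_, a = naive_compare("zzzzzzz", secret)   # wrong at position 1
_, b = naive_compare("hunzzzz", secret)   # wrong at position 4
_, c = naive_compare("hunterz", secret)   # wrong at position 7
print(a, b, c)  # 1 4 7

# Hashing first removes the signal: the digests of near-miss guesses share
# no meaningful common prefix, so the examined count no longer tracks how
# close the guess was.
def hashed_compare(guess, secret):
    return naive_compare(hashlib.sha256(guess.encode()).hexdigest(),
                         hashlib.sha256(secret.encode()).hexdigest())
```

In real code you'd use a constant-time comparison such as Python's `hmac.compare_digest` rather than rolling your own, but the counter above shows why the naive loop leaks in the first place.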
    1 point
  48. I'd actually like to have HaxMe become a potential Luminary || Innovator level donor in the future. https://www.eff.org/thanks
    1 point
  49. Offensive Security has some of the most, if not the most, respected certifications in the industry. It differs from other certs, like CEH, in that instead of providing knowledge that's a mile wide and an inch deep, it gives you hands-on drills and practice. Unfortunately, the program is also quite costly. If you can learn the whole thing proficiently in 30 days, you're looking at $800.00 for the OSCP alone. Ultimately, if you want the cert, you're going to have to pay. In the meantime, I want to write about how you can acquire the same skills now for cheap, and in some cases free. And maybe at some point down the line, you won't need all 30 days to get the parchment.

I'll also add that, while this is a damn decent set of certs and courses, it's not comprehensive and heavily relies on you to do your own research. Supplementary certs such as CCNA or LPIC will be needed, for example, to elaborate on any particular concept (tis why I'm studying CCNA currently).

Disclaimer: I haven't obtained any of these certs myself yet, so I'm offering general information only.

Certifications:

OSCP
link: https://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/
Description:
Learning Goals:
    - Identify existing vulnerabilities
    - Execute organized attacks
    - Write simple Bash and/or Python scripts
    - Modify existing exploit code to their advantage
    - Perform network pivoting
    - Perform data exfiltration
    - Compromise poorly written PHP applications
    - Keep going until you win
Official Training guide: Penetration Testing with Kali Linux (PWK)
Talking Point: The first bullet in the learning goals is very important to note. By the end of the course you're not going to be writing custom exploits. The entire point of it is to get good at using what's already available in an efficient and creative way. You won't be developing custom exploits because that isn't the point.
The point is to practice using the cyber kill chain, with success, until it's burned into your brain. You won't learn until you get that positive feedback - the shot of dopamine from actually compromising a box. Focusing on the details at this level would only slow down your learning, so at this level just use other people's exploits and tools.

Substitutes/freebies:
    - Training guide + video series (2014): https://thepiratebay.org/torrent/20152226/Offensive-Security__PWK__Penetration_Testing_with_Kali
    - Free practice labs (VMware): https://www.vulnhub.com/
    - More labs, specifically concentrating on learning the Linux system hierarchy and common commands: http://overthewire.org

What I can't find is a viable substitute for simulating an actual network, and you're not always going to be testing from the same subnet as the target.

OSCE
link: https://www.offensive-security.com/information-security-certifications/osce-offensive-security-certified-expert/
Description:
Learning Goals:
    - Obtain a shell from basic web application attacks such as XSS and directory traversal
    - Modify executable files with custom shellcode on Windows
    - Avoid AV; deal with ASLR
    - Find possible 0days using fuzzing techniques, then develop an exploit
Official Training guide: Cracking the Perimeter (CTP)
Talking Point: The actual course description on this isn't too indicative of what you learn in the course, so I did my best to extract the learning goals from a 2012 CTP manual. This course gets quite a bit more advanced and relies far more on the individual's ability to do their own research. That said, I'm actually unconvinced of this course's usefulness outside of its case studies. The Shellcoder's Handbook is a thousand-page tome that elaborates way more on all of these topics. We're actually getting into the hard security research/computer science realm with this.
For labs in this course I'd recommend finding exploits on exploit-db or Packet Storm, setting up a debug environment yourself, fuzzing (this is the analog of recon at this level), and writing your own version of the exploit. Compare your code to the POC code.

Substitutes/freebies:
    - OLD training guide (2012): https://thepiratebay.org/torrent/7483548/Offensive_Security_-_BackTrack_to_the_Max_Cracking_the_Perimeter
    - Shellcoder's Handbook: http://index-of.es/Varios/Wiley.The.Shellcoders.Handbook.2nd.Edition.Aug.2007.ISBN.047008023X.pdf

OSWE
link: https://www.offensive-security.com/information-security-certifications/oswe-offensive-security-web-expert/
Description:
Learning Goals:
    - Fingerprint web applications
    - Identify vulnerabilities
    - Exploit the vulnerabilities found
    - Write a report about it
Official Training guide: Advanced Web Attacks and Exploitation (AWAE)
Talking Point: This is the OSCP equivalent for web applications. Not much in the way of crafting your own exploits. Fortunately, new exploits found in web applications tend to be rehashes of other common vulnerabilities, so with webdev experience it starts to become intuitive anyway. Labs for web app attacks are everywhere, so this is the easiest one to learn the basics of.

Substitutes/freebies:
    - Web Application Hacker's Handbook (huge comprehensive tome): https://leaksource.files.wordpress.com/2014/08/the-web-application-hackers-handbook.pdf
    - Lab (courtesy of mls577): https://www.owasp.org/index.php/OWASP_Mutillidae_2_Project
    - Lab: hackthissite.com

OSWP
link: https://www.offensive-security.com/information-security-certifications/oswp-offensive-security-wireless-professional/
Description:
Learning Goals:
    - Conduct wireless information gathering
    - Circumvent wireless network access restrictions
    - Crack various WEP, WPA, and WPA2 implementations
    - Implement transparent man-in-the-middle attacks
    - Demonstrate their ability to perform under pressure
Official Training guide: Offensive Security Wireless Attacks (WiFu)
Talking Point: This is a narrow topic that only covers Wi-Fi (no discussion of Bluetooth, for example).

Substitutes/freebies:
    - Course manual (2012): https://thepiratebay.org/torrent/20152240/Offensive-Security_-_OSWP_-_WiFu

OSEE
link: https://www.offensive-security.com/information-security-certifications/osee-offensive-security-exploitation-expert/
Description:
Learning Goals:
    - Reverse engineering, assembly/disassembly
    - Develop sophisticated exploits
    - Create custom shellcode
    - Evade DEP and ASLR protections
    - Exploit Windows kernel drivers
    - Perform precision heap sprays
Official Training guide: Advanced Windows Exploitation (AWE)
Talking Point: This course is only available live, by attending Black Hat here in Vegas. Need an expert to make a recommendation for this one.

Substitutes/freebies:
    - OLD course manual (2012 - this is so freakin dated): https://thepiratebay.org/torrent/7835702/Offensive_Security_-_Advanced_Windows_Exploitation_(AWE)_v_1.1
    - A free course from Offensive Security, Metasploit Unleashed: https://www.offensive-security.com/metasploit-unleashed/
    1 point
  50. ExploitDB, Offensive Security's Exploit Database Archive, is an amazing resource, be it for Google dorks, exploits, shellcode, or technical papers. https://www.exploit-db.com/ Want to be able to search for exploits offline, or via terminal? Check out the following; a few simple commands will arm you with the entire DB! https://www.exploit-db.com/searchsploit/
    1 point