
Counterfeiter nabbed while trying to return printer with fake money inside


As reported by the Chippewa Herald, 37-year-old Jarad S. Carr visited a Walmart in Lake Hallie, Wisconsin, last week in an attempt to return an inkjet printer without any proof of purchase. When the Walmart employees refused to take the printer back, Carr began arguing with them in hopes of recovering a portion of the purchase price. While inspecting the printer more closely during the argument, the employees discovered a single sheet of paper inside with two counterfeit $100 bills printed on it. The sheet also had a hole cut out of it in the shape of a third fake bill.

At this point, the Walmart employees at the return desk discreetly placed a phone call to the local police while Carr, still agitated about the lack of cooperation, refused to leave the store without payment.

As the employees stalled Carr, two Lake Hallie officers quickly arrived at the store and attempted to question him about the counterfeit bills inside the printer. Carr refused to answer any questions and resisted the officers after being told he was being placed under arrest. Once Carr was safely taken into custody, the officers discovered three additional counterfeit $100 bills while searching his clothing.

Carr was arrested for attempted theft by fraud and forgery, as well as resisting arrest. Not surprisingly, Carr was already wanted on two felony warrants for armed robbery and burglary in a nearby county. In an interview about the incident, Lake Hallie Police Chief Cal D. Smokowicz told NBC News: "You go to a Walmart with a printer to return and no receipt, with your counterfeit bills still lodged in it, and you want to dicker with the clerks to get half price back, when you have warrants out for you. There are a few lessons here about not drawing attention to yourself."

Mike Flacy