
Portable security router InvizBox Go raises €100,000 in crowdfunding campaign

Irish tech start-up InvizBox has successfully raised over €100,000 ($115,000) in its Kickstarter campaign to fund its new portable VPN router, the InvizBox Go.

The InvizBox Go is open source and promises to protect you while connected to a public Wi-Fi network by routing your traffic through Tor or a VPN connection for added security. It requires no software, has an optional ad blocker, and can also be used to access geo-restricted websites much like a regular VPN.
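InvizBox hasn't published the Go's firmware details here, but the general technique it describes, transparently pushing all of a client's traffic through Tor, is commonly implemented on small Linux-based routers with a couple of firewall redirect rules pointing at Tor's TransPort and DNSPort. The sketch below is purely illustrative and not InvizBox's actual implementation; the interface name, port numbers, and use of iptables are assumptions for the example.

# Illustrative only: a generic Tor transparent-proxy setup of the kind
# small privacy routers use. NOT InvizBox's firmware; interface name and
# ports are assumptions. Requires root and a Tor instance configured with
# matching TransPort/DNSPort values.
import subprocess

LAN_IF = "wlan0"        # hypothetical client-facing Wi-Fi interface
TOR_TRANS_PORT = 9040   # Tor's TransPort (transparent TCP proxy)
TOR_DNS_PORT = 5353     # Tor's DNSPort

RULES = [
    # Send clients' DNS queries to Tor's DNS resolver
    ["iptables", "-t", "nat", "-A", "PREROUTING", "-i", LAN_IF,
     "-p", "udp", "--dport", "53", "-j", "REDIRECT",
     "--to-ports", str(TOR_DNS_PORT)],
    # Redirect new TCP connections from clients into Tor's transparent proxy
    ["iptables", "-t", "nat", "-A", "PREROUTING", "-i", LAN_IF,
     "-p", "tcp", "--syn", "-j", "REDIRECT",
     "--to-ports", str(TOR_TRANS_PORT)],
]

def apply_rules() -> None:
    for rule in RULES:
        subprocess.run(rule, check=True)

if __name__ == "__main__":
    apply_rules()

A VPN mode would typically swap these redirect rules for routing all LAN traffic out through a tunnel interface instead, which is presumably where InvizBox's unnamed VPN partner comes in.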

The company has partnered with a VPN provider to bolster the device's privacy features, but has declined to name which one just yet.

InvizBox funded its original router on Indiegogo last year, raising $20,000; that device was aimed at home users rather than those on public connections.

Unlike its predecessor, the InvizBox Go does not require an Ethernet connection. The start-up says it will now use the funds to further develop its prototype by working on the battery, firmware, and circuitry, and promises to ship the device in early 2016.

A number of privacy routers have sprung up on crowdfunding sites in the last year or so, only to crumble under scrutiny. Last year, the Anonabox hardware was pulled from Kickstarter following claims that it over-promised anonymity and that its parts weren't custom made as advertised.

The InvizBox Go, on the other hand, is completely open source, and the creators say they have spoken with security penetration testing company Mandalorian about carrying out a full security audit on the InvizBox Go once it is complete.

Backer rewards priced the device at around $90 to $100, including a year's VPN subscription, but it's not yet known how much the InvizBox Go will cost once it's generally available.

Jonathan Keane
Former Digital Trends Contributor