/etc/resolv.conf’s nameservers. I wanted to avoid getting any from the router, even though I wanted an IPv6 address through SLAAC.
Since the server runs Ubuntu, netplan is the default network configuration tool:
network:
  ethernets:
    eno1:
      addresses:
        # static IPv4 address
        - 10.0.10.10/24
      routes:
        # static IPv4 gateway
        - to: default
          via: 10.0.10.1
          metric: 100
          on-link: true
      # IPv6 gateway comes via RA
      nameservers:
        addresses:
          - 10.0.10.20
        # always use FQDN as short names clash with SSL
        search: [""]
      # disabled by default
      # dhcp4: false
      # dhcp4-overrides:
      #   use-dns: false
      #   use-domains: false
      # IPv6
      # We want IPv6 to be configured by RA only
      accept-ra: true
      # dhcp6: false
      # dhcp6-overrides:
      #   use-dns: false
      #   use-domains: false
      ipv6-privacy: false
  renderer: networkd
  version: 2
One sudo netplan apply later and our configuration is applied.
However, we have one issue: we’re getting an IPv6 DNS address pushed…:
> cat /etc/resolv.conf
# ...
nameserver 10.0.10.20
nameserver 2600:xxxx:xxxx:xxxx::1
search .
Parsing netplan’s reference turned up a mention of stateless configuration here. Time to try it out.
Let’s switch it on.
network:
  ethernets:
    eno1:
      # ...
      # IPv6
      # We want IPv6 to be configured by RA only
      accept-ra: true
      dhcp6: true
      dhcp6-overrides:
        use-dns: false
        use-domains: false
      ipv6-privacy: false
      # ...
Running > sudo netplan apply && sudo systemctl restart systemd-resolved.service
and …
> cat /etc/resolv.conf
# ...
nameserver 10.0.10.20
nameserver 2600:xxxx:xxxx:xxxx::1
search .
Still there… What’s going on?
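Before digging into netplan itself, systemd-resolved can show which link the stray nameserver is attached to. A quick check (a sketch, assuming systemd-resolved is in use and the interface name matches the config above):

```shell
# Show per-link DNS configuration; the pushed IPv6 nameserver
# shows up under the eno1 link
resolvectl status eno1
```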
Digging deeper reveals that netplan is an abstraction tool over systemd.network. Reading through its configuration reference something caught my attention:
IPv6AcceptRA=
Notice the reference to an [IPv6AcceptRA] section.
And this rang a bell… The DNS address that I got pushed comes from my router’s RDNSS (the Recursive DNS Server option in its Router Advertisements).
So how do we configure this with netplan? We cannot. Searching netplan’s code for IPv6AcceptRA yields nothing that lets us set these options.
Eventually I ended up setting it manually in the networkd configuration:
> cat /run/systemd/network/10-netplan-eno1.network
[Match]
Name=eno1
[Network]
DHCP=ipv6
LinkLocalAddressing=ipv6
Address=10.0.10.10/24
IPv6AcceptRA=yes
DNS=10.0.10.20
[Route]
Destination=0.0.0.0/0
Gateway=10.0.10.1
GatewayOnLink=true
Metric=100
[DHCP]
RouteMetric=100
UseMTU=true
UseDNS=false
UseDomains=false
# vvvvvvvv
[IPv6AcceptRA]
UseDNS=false
UseDomains=false
# ^^^^^^^^
No more sudo netplan apply. This time we do sudo systemctl restart systemd-networkd.service && sudo systemctl restart systemd-resolved.service.
Checking /etc/resolv.conf
one more time:
> cat /etc/resolv.conf
# ...
nameserver 10.0.10.20
search .
Bingo!
After some more searching I stumbled upon an existing bug report on Launchpad. At least I’m not the only one.
I removed the netplan config for this interface. That way it cannot overwrite the networkd configuration that I edited.
-Kristof
Initially, we had a network for the computers at home, 192.168.25.0/24. (Of course, IoT sits in another network.)
The Wireguard VPN device sat on 192.168.25.20, and since it uses NAT, I can’t track individual peers at the network level: all traffic coming over the Wireguard VPN looks like it originates from that one IP.
Time to split that up.
So what are the steps we need to take?
Note: I contemplated keeping my Wireguard clients in the same subnet, but that would mean that EVERY device in that subnet would need to get static routes to the Wireguard clients. Whereas if I move them to a separate subnet, only the router needs to get the static route.
I chose to use 192.168.30.0/24
for all my Wireguard clients.
Our Wireguard server sits at 192.168.25.20
, so on the router we add the static route:
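On a Linux-based router, the route would look something like the following (a sketch, assuming iproute2; the exact syntax for a persistent route depends on your router):

```shell
# Send traffic for the new Wireguard client subnet
# to the Wireguard server
ip route add 192.168.30.0/24 via 192.168.25.20
```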
The default wg0.conf
needs to be updated to look like this:
[Interface]
Address = 192.168.30.1
ListenPort = 51820
PrivateKey = ##############################################
[Peer]
# peer_iPhoneKristof
PublicKey = ##############################################
AllowedIPs = 192.168.30.10/32
So what did we change?
- We dropped PostUp and PostDown, as these are only needed when we do NAT (i.e. Masquerade).
- We changed the Interface’s address to be .1 in our new range.
- We changed the peer’s AllowedIPs to be .10 in our new range.
Lastly, the underlying server needs 2 changes:
In /etc/sysctl.conf
(or wherever it is on your flavor of Linux):
# Enable IPv4 packet forwarding
net.ipv4.ip_forward=1
# Enable Proxy ARP (https://en.wikipedia.org/wiki/Proxy_ARP)
net.ipv4.conf.all.proxy_arp=1
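To activate these settings without a reboot, reload them (assuming they were added to /etc/sysctl.conf):

```shell
# Reload kernel parameters from /etc/sysctl.conf
sudo sysctl -p
```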
We need to make sure that the client now connects with the right IP, and that the client’s AllowedIPs
are set up to target our ORIGINAL range:
[Interface]
PrivateKey = ##############################################
ListenPort = 51820
Address = 192.168.30.10
DNS = 192.168.25.5 # adguard sits here
[Peer]
PublicKey = ##############################################
AllowedIPs = 192.168.25.0/24
Endpoint = my.endpoint.com:51820
We can now track individual Wireguard clients on our network.
let result =
    try
        let request =
            "https://somewebsite/with/expired/ssl/certificate/data.json?paramx=1&paramy=2"
            |> WebRequest.Create
        let response =
            request.GetResponse ()
        // parse data
        let parsed = "..."
        Ok parsed
    with
    | ex ->
        Error ex
If we execute this, then result would be an Error containing the following exception:
ex.Message
"The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel."
ex.InnerException.Message
"The remote certificate is invalid according to the validation procedure."
So how do we fix this?
The solution is to set the following code at startup of the application (or at least before the first call):
ServicePointManager.ServerCertificateValidationCallback <-
new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)
Notice that you should not do this, because this does not validate the certificate at ALL! Also, this applies to ALL calls; if you want to do it for a specific call you need to make some changes.
First of all, it doesn’t work with WebRequest.Create; you need to use WebRequest.CreateHttp, or cast the WebRequest to HttpWebRequest, as the property we need, ServerCertificateValidationCallback, is not available on WebRequest, only on HttpWebRequest. The resulting code looks like this:
let request =
    "https://somewebsite/with/expired/ssl/certificate/data.json?paramx=1&paramy=2"
    |> WebRequest.CreateHttp
request.ServerCertificateValidationCallback <- new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)
let response =
    request.GetResponse ()
Again, don’t do this in production! If need be, do it on a single HttpWebRequest, like the last example, and write some code that ignores the expiration part but leaves the rest of the validation in place.
Code on Github!
In order to prevent issues like this we looked at npm shrinkwrap. This writes a file called npm-shrinkwrap.json which ‘freezes’ all of the package versions that are installed in the current project.
Now this is dangerous, as we just found out.
The issue arises when an author decides to delete a package from the registry. Delete, you ask? Yes, delete: gone, no trace, nothing.
What’s the problem, you might ask? Surely you’d notice the next time you install? Not really.
Imagine you’re using Elastic Beanstalk, which, based on certain triggers, can spawn a new instance of a server, or delete one.
Now today you release your application to your servers, and you shrinkwrap
your packages.
Before you do that, you obviously clear your local npm cache (located in %appdata%\npm-cache) and your local node_modules. Then you do an npm install to verify every package is correctly installed, and you do a few test runs, maybe on a local server. Then you package and send it off to AWS.
All runs well, you’re happy, and your boss is happy.
Next week, for whatever reason, you get a high load on your servers. Elastic Beanstalk decides to add one more instance.
And then stuff starts to break. You get emails that its health is degraded. Then you get emails that its health is severe.
At 2 a.m. you open your laptop and start looking at the logs. There you find something along the lines of:
npm ERR! Linux 3.14.48-33.39.amzn1.x86_64
npm ERR! argv "/usr/bin/iojs" "/usr/bin/npm" "install"
npm ERR! node v2.4.0
npm ERR! npm v2.13.0
npm ERR! version not found: node-uuid@1.4.4
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR! <https://github.com/npm/npm/issues>
npm ERR! Please include the following file with any support request:
npm ERR! /app/npm-debug.log
What? You tested locally? What happened?
Okay, you fire up a console window. You make a test dir. You run npm install node-uuid@1.4.4.
All goes well. Or does it?
Let’s look at the output:
C:\__SOURCES>mkdir test
C:\__SOURCES>cd test
C:\__SOURCES\test>npm install node-uuid@1.4.4
npm http GET https://registry.npmjs.org/node-uuid/1.4.4
npm http 404 https://registry.npmjs.org/node-uuid/1.4.4
node-uuid@1.4.4 node_modules\node-uuid
Notice the 404
? I didn’t… But it’s important!
Now here’s what happened: locally I had node-uuid@1.4.4 in my cache, so npm took that one, even though the package had disappeared from the registry.
However: my new instance on Elastic Beanstalk didn’t. That’s why it failed.
So, solutions? Don’t shrinkwrap. Stuff might break in the future, as authors delete packages.

We had an object with some properties that we wanted to update, but only if a certain property of that object is not set, i.e. it should be null.
{
    "Id": 1 // Id is the HashKey
}
In this case we wanted to update the object with Id 1, and set an attribute called Foo to "Bar".
To do this I wrote the following JavaScript, using the aws-sdk:
function updateObject(id) {
    var dynamodb = new AWS.DynamoDB();
    dynamodb.updateItem({
        Id: id
    }, {
        UpdateExpression: "SET Foo = :value",
        ExpressionAttributeValues: {
            ":value": "Bar"
        },
        ConditionExpression: "attribute_not_exists(Foo)"
    }, function(error, data) {
        if (error) {
            // TODO check that the error is a ConditionalCheckFailedException, in
            // which case the Condition failed, otherwise something else might be off.
            console.log("Error");
        } else {
            console.log("All good, we've updated the object");
        }
    });
}
Perfect!
Now assume we have a range of 1 -> 12 in our table, where half of them already have the Foo attribute, so we should get 50% Error and 50% All good, ... (which is the case).
However, what do we expect when we update an item with Id 13?
In my mind, which talks (well, used to talk) SQL when thinking about a database, updating something that is not there doesn’t do anything.
Consider the following table:
CREATE TABLE Test(
    Id INT NOT NULL,
    Foo NVARCHAR(255) NULL
)
With the following query:
INSERT INTO Test (Id, Foo) VALUES (1, NULL), (2, N'Bar'), (3, NULL)
GO
--SELECT * FROM Test
--GO
UPDATE Test SET Foo = 'Bar' WHERE Id = 1 AND Foo IS NULL
IF @@ROWCOUNT = 1
BEGIN
SELECT N'1 updated, set Foo to Bar'
END
ELSE
BEGIN
SELECT N'1 not updated, Foo was already set'
END
GO
--SELECT * FROM Test
--GO
UPDATE Test SET Foo = 'Bar' WHERE Id = 2 AND Foo IS NULL
IF @@ROWCOUNT = 1
BEGIN
SELECT N'2 updated, set Foo to Bar'
END
ELSE
BEGIN
SELECT N'2 not updated, Foo was already set'
END
--SELECT * FROM Test
--GO
UPDATE Test SET Foo = 'Bar' WHERE Id = 7 AND Foo IS NULL -- 7 Doesn't exist!
IF @@ROWCOUNT = 1
BEGIN
SELECT N'7 updated, set Foo to Bar'
END
ELSE
BEGIN
SELECT N'7 not updated, because 7 doesn''t exist!'
END
This will print, along with some empty result sets, the following:
1 updated, set Foo to Bar
2 not updated, Foo was already set
7 not updated, because 7 doesn't exist!
Now, that knowledge from SQL doesn’t apply to DynamoDB.
While testing with some non-existing ids we saw that our code reported success for all of them. That’s not how it should be.
Let’s take a look at the documentation again, this time actually reading the first line:
Edits an existing item’s attributes, or adds a new item to the table if it does not already exist.
(emphasis mine).
So we need to guard ourselves against updates on non-existing items? How do we do that? Let’s extend our ConditionExpression. Start by taking the original code, and change the ConditionExpression as follows:
function updateObject(id) {
    var dynamodb = new AWS.DynamoDB();
    dynamodb.updateItem({
        Id: id
    }, {
        UpdateExpression: "SET Foo = :value",
        ExpressionAttributeValues: {
            ":id": id,
            ":value": "Bar"
        },
        // make sure the object we're updating actually has
        // :id as Id, the side-effect of this is that if none of those
        // is found, it will throw a ConditionalCheckFailedException
        // which is what we want
        ConditionExpression: "Id = :id AND attribute_not_exists(Foo)"
    }, function(error, data) {
        if (error) {
            // TODO check that the error is a ConditionalCheckFailedException, in
            // which case the Condition failed, otherwise something else might be off.
            console.log("Error");
        } else {
            console.log("All good, we've updated the object");
        }
    });
}
One of those things was manually applying Key Policies on Encryption Keys.
Notice the sentence shown on the key’s policy page:
We’ve detected that the policy document for this key has been manually edited. You may now edit the document directly to make changes to permissions.
This causes a lot of issues: for example, you can no longer view grants through the UI, nor can you easily add & remove Key Administrators. While the API allows you to modify the grants, that wasn’t enough for the simple changes we’d like to make when testing / operating our products.
Because you cannot delete or reset keys in AWS, you have to find another way.
I do have another key that shows me the UI I want, where I can modify Key Administrators and Key Usage.
So, what do we do? We fetch the correct policy from a key that shows the correct UI and apply it to our ‘broken’ key to see if it works. (Spoiler: it does.)
Should you not have a ‘working’ key and not want to create a new one for the sake of doing this (you can’t delete a key, so I completely understand), scroll down to the full policy below.
First, let’s get the ARN of a working key, just navigate to the Encryption Key section in the IAM Management console, set your region and select your key, and copy the ARN:
So, how do we get that correct policy? Let’s use Python with boto3.
First of all, we make sure we have credentials configured in %userprofile%\.aws\credentials. If you don’t, please follow the steps here.
Next up is ensuring we have boto3 installed. Fire up a cmd window and execute the following:
pip install boto3
When that’s done, we can open Python and ask that key for its policy.
import boto3
kms = boto3.client("kms")
policy = kms.get_key_policy(KeyId="THE ARN YOU JUST GOT FROM A WORKING KEY", PolicyName="default")["Policy"]
print(policy)
2 things here:
- Do paste in the correct ARN!
- Why default as the policy name? That’s the only one they support.
That policy is a JSON string. It’s full of \n
gibberish, so let’s trim that out (in the same window, we reuse that policy
variable):
import json
json.dumps(json.loads(policy))
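The same cleanup can be done from a shell, piping the policy through a small Python one-liner that re-serializes it onto a single line (the policy value below is a stand-in for the one you fetched; python3 is assumed to be on the PATH):

```shell
# Compact a pretty-printed JSON policy onto a single line
policy='{
  "Version": "2012-10-17",
  "Id": "key-consolepolicy-2"
}'
echo "$policy" | python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin)))'
# prints {"Version": "2012-10-17", "Id": "key-consolepolicy-2"}
```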
Which should give you this beautiful JSON document:
'{"Version": "2012-10-17", "Id": "key-consolepolicy-2", "Statement": [{"Action": "kms:*", "Principal": {"AWS": "arn:aws:iam::************:root"}, "Resource": "*", "Effect": "Allow", "Sid": "Enable IAM User Permissions"}, {"Action": ["kms:Describe*", "kms:Put*", "kms:Create*", "kms:Update*", "kms:Enable*", "kms:Revoke*", "kms:List*", "kms:Get*", "kms:Disable*", "kms:Delete*"], "Resource": "*", "Effect": "Allow", "Sid": "Allow access for Key Administrators"}, {"Action": ["kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt"], "Resource": "*", "Effect": "Allow", "Sid": "Allow use of the key"}, {"Action": ["kms:ListGrants", "kms:CreateGrant", "kms:RevokeGrant"], "Resource": "*", "Effect": "Allow", "Condition": {"Bool": {"kms:GrantIsForAWSResource": true}}, "Sid": "Allow attachment of persistent resources"}]}'
(!) Notice the single quotes at the beginning and the end. You DON’T want those. Also notice that I’ve removed my Account Id (replaced by asterisks), so if you’re just copy-pasting, make sure you replace them with your own Account Id, which you can find here (middle, Account Id, 12 digit number).
Now let’s go to our broken key again, and in the policy field we paste in our just-retrieved working policy.
Hit the save button, and lo and behold, we revert back to the original UI.
Success!
This means that if you get a new laptop, or a new member joins the team, or even when you need to change your Windows password, you just need to run the script again and it will set up everything in the correct locations & with the correct credentials.
The credentials were a problem though.
When installing a Topshelf service with the --interactive
parameter (we need to install under the current user, not System
) it will prompt you for your credentials for each service you want to install. For one, it’s fine, for 2, it’s already boring, for 3, … You get the point.
We initially used the following command line to install the services:
. $pathToServiceExe --install --interactive --autostart
To fix this we will give the $pathToServiceExe the username and password ourselves with the -username and -password parameters. We should also omit the --interactive flag.
First gotcha here: When reading the documentation, it says one must specify the commands in this format:
. $pathToServiceExe --install --autostart -username:username -password:password
However, this is not the case. You must not separate the command line argument and its value with a :, but with a space.
Now, we don’t want to hardcode the username & password in the setup script.
So let’s get the credentials of the current user:
$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"
Next up we should extract the username & password of the $credentialsOfCurrentUser
variable, as we need it in clear-text (potential security risk!).
One can do this in 2 ways, either by getting the NetworkCredential
from the PSCredential
with GetNetworkCredential()
:
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = ("{0}\{1}") -f $networkCredentials.Domain, $networkCredentials.UserName # change this if you want the user@domain syntax, it will then have an empty Domain and everything will be in UserName.
$password = $networkCredentials.Password
Notice the $username
caveat.
Or, by not converting it to a NetworkCredential
:
# notice the UserName contains the Domain AND the UserName, no need to extract it separately
$username = $credentialsOfCurrentUser.UserName
# little more for the password
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credentialsOfCurrentUser.Password)
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
Notice the extra code to retrieve the $password
in plain-text.
I would recommend combining both, using the NetworkCredential
for the $password
, but the regular PSCredential
for the $username
as then you’re not dependent on how your user enters his username.
So the best version is:
$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = $credentialsOfCurrentUser.UserName
$password = $networkCredentials.Password
Now that we have those variables we can pass them on to the install of the Topshelf exe:
. $pathToServiceExe install -username `"$username`" -password `"$password`" --autostart
Notice the backticks (`) to ensure the double quotes are escaped.
In this way you can install all your services and only prompt your user for his credentials once!
]]>"no"
will result in false
.
HTML:
<div ng-app>
  <div ng-controller="yesNoController">
    <div ng-if="yes">Yes is defined, will display</div>
    <div ng-if="no">No is defined, but will not display on Angular 1.2.1</div>
    <div ng-if="notDefined">Not defined, will not display</div>
  </div>
</div>
JavaScript:
function yesNoController($scope) {
    $scope.yes = "yes";
    $scope.no = "no";
    $scope.notDefined = undefined;
}
Will print:
Yes is defined, will display
Let’s read the documentation on expression, to see where this case is covered.
.
.
.
.
Can you find it?
Neither can I.
Fix?
Use the double bang:
<!-- ... -->
<div ng-if="!!no">No is defined, but we need to add a double bang for it to parse correctly</div>
<!-- ... -->
JSFiddle can be found here.
For those who care, it’s not a JS thing:
// execute this line in a console
alert("no" ? "no evaluated as true" : "no evaluated as false"); // will alert "no evaluated as true"
For example, see this package (whose name implies that it is from Microsoft, but which then states that it is not).
At the moment of writing the above-linked package even throws an error when you return a 200 OK without a body…
But in the end, it’s very simple to enable compression on your IIS server without writing a single line of code:
You first need to install the IIS Dynamic Content Compression module:
Or, if you’re a command line guy, execute the following command in an elevated CMD:
dism /online /Enable-Feature /FeatureName:IIS-HttpCompressionDynamic
Next up, you need to allow Dynamic Content Compression to compress application/json and application/json; charset=utf-8.
To do this, execute the following commands in an elevated CMD:
cd c:\Windows\System32\inetsrv
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
This adds the 2 mimetypes to the list of types the module is allowed to compress. Validate that they are added with this command:
appcmd.exe list config -section:system.webServer/httpCompression
In the output, verify that the 2 mimetypes are present and enabled.
And lastly, you’ll probably need to restart the Windows Process Activation Service.
The easiest way is to do this through the UI, because I have yet to find a way in CMD to restart a service together with the services that depend on it.
In services.msc you’ll need to search for Windows Process Activation Service. Restart it.
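That said, one approach that generally works from an elevated CMD is to stop WAS along with its dependents and then start the web service again, which pulls WAS back up as a dependency (a sketch; service names assume a default IIS install):

```shell
:: Stop WAS and everything that depends on it (including W3SVC)
net stop WAS /y
:: Start W3SVC, which starts WAS again as a dependency
net start W3SVC
```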
Obviously there are more settings available, take a look at the httpCompression Element settings page.
I recommend reading about 2 at least:
Good luck,
-Kristof
I got a machine in my hands which exhibited the previously mentioned problem. However, Tablet PC Settings wasn’t installed, so we couldn’t open the tab.
After searching the bowels of the internet I found the following shell shortcut:
shell:::{80F3F1D5-FECA-45F3-BC32-752C152E456E}
Putting this in Winkey+R, or in the Windows 7/8(.1) search box will open the Tablet PC Settings, and if you don’t have a touch screen, will default to the Other tab, where you can change the handedness of your menus!
Have a good one,
-Kristof