
Using nictool to push dns updates via nsupdate

Started by abeeson, May 01, 2014, 11:21:25 am


abeeson

Hey everybody,

I'm looking at NicTool at the moment for us to use at work. We're running BIND servers, but we have a requirement to push nsupdates into our zones rather than letting NicTool manage them directly.

My assumption is that the best way to achieve this would be to write a new export method called "nsupdate", but since I've only just started looking at this, I thought I'd come here first.

Is this something anybody has looked at before? Any reason why it would be a bad idea? My view here is that I'll be ignoring zone creations and deletions (we'll be handling those via Puppet) and using NicTool to do the zone record inserts and deletes. I plan to do this via the NicTool client (CNAMEs, things like that) and via the API from our IPAM product for A records, PTRs, etc.

I plan to inject the existence of the zones via the API as well for the reverse subnets; the API seems to support that fairly well.

Ideally the nsupdates would go out almost instantaneously after a change is made, but a small delay would be OK as well. Obviously I can't have it continuously re-inserting the same records, and I need it to issue the appropriate deletes when records are removed...

I'll also have a DDNS key on the changes, but I'll either write that key into the export code or use the description field. Neither is ideal, but I'm trying to avoid making any non-standard changes that would prevent me from doing NicTool updates later....
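For anyone following along, this is roughly the shape of batch such an export would hand to nsupdate: delete the old entry, add the new one, and sign the whole transaction with the TSIG key. The server, zone, record names, and key path below are hypothetical examples, not actual NicTool output.

```shell
# Hypothetical nsupdate batch for one changed record:
# remove the stale entry, then add the replacement.
cat > /tmp/nsupdate-batch.txt <<'EOF'
server ns1.example.com
zone example.com.
update delete old-host.example.com. A
update add new-host.example.com. 300 A 192.0.2.10
send
EOF

# The export job would then run something like:
#   nsupdate -k /etc/bind/ddns.key /tmp/nsupdate-batch.txt
cat /tmp/nsupdate-batch.txt
```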

I'm happy to contribute any work I do on this to be integrated into NicTool natively, if it's wanted.

Thoughts?

matt

May 02, 2014, 07:30:09 am #1 Last Edit: May 03, 2014, 04:27:43 pm by matt
Hello abeeson,

As I understand it, you are going to create and delete zones in NicTool, ignore their export (b/c puppet already created them), and then publish zone record updates via the nsupdate mechanism.

It seems the "most likely to be usable by others" approach would be to create a new export class (perhaps NicToolServer::Export::BIND::nsupdate) with a configurable set of export objects that you want to suppress (z.add, z.del, zr.ptr, etc.). Then you could have a fully working export mechanism for an NS via your export class, as well as a customized one for your puppet integration whose changes are merely config objects.

One issue you'll need to resolve is that the NicTool DB schema only updates timestamps on modified *zones*. All the present export mechanisms do a full export for any zone with changes. You wish to only export zone record updates, so your shortest path to finding zr updates might be spelunking in the nt_zone_record_log.

(Sorry, there are currently no callbacks or triggers to automatically kick off an export immediately after an update. That's something I plan to add in a future version of NicTool that will likely run under node.js.)

abeeson



Cheers Matt!

That is exactly what we had in mind. I had a feeling the update watching might be the issue, but that's OK; I'll check out nt_zone_record_log now that I have an install spun up and ready to go  :D

abeeson

Hey Matt,

I've been looking into the nsupdate export module for the last day or two. I've forked the git repo and pushed some changes to my fork to get a basic module into NicTool (with the associated client and server changes to see and allow using it).

I'm now at the point of trying to export the zone changes, and I've hit a spot where I'm hoping you might have some input.

I need to pull only the recent changes, so as you suggested I had a look at the nt_zone_record_log table. It was good, but it doesn't contain one crucial detail: for updates to IPs/names, the old entry.

For cleanup reasons I need that old entry so I can delete it before creating the new one. For that reason I've now moved on to nt_user_global_log, as it keeps a much more detailed description. Before I go much further, though, I had a few design questions.

Looking through your code, it looks like you've tried to use API calls as much as possible to avoid duplicating code, which I think is a great idea, but I'm trying to find a way to pull just the last x minutes of logs (let's call it 30) and I can't see a way to do this with the API calls directly.

The SQL I have so far is: SELECT * FROM nt_user_global_log WHERE timestamp > $time (with $time set earlier to time - 1800 for now), and I'm using the passed dbix_w handle to make the calls. (I'll likely drop that down to 5 minutes or faster and run the cron on that schedule.)
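As a sketch, if this ends up cron-driven on a 5-minute cycle, the crontab entry might look something like the following; the script path and argument here are hypothetical placeholders, not the actual NicTool tooling.

```
# hypothetical crontab entry: run the nsupdate export every 5 minutes
*/5 * * * * /usr/local/nictool/server/bin/nt_export.pl nsupdate
```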

I'm sure there's a way to make that call work using your standard subs (though there may not be a server call for get_global_application_log; I haven't gotten far enough to check), but for the life of me I can't seem to phrase the query so that it returns only the latest entries....

I'm also trying to come up with a way to have the export code check this before exporting, but without modifying Base.pm / Export.pm I'm struggling to find a neat way. If you have any suggestions I'd love to hear them; I'm trying to make this as modular as possible while adhering to your current design, without any dodgy code hacks.

matt

Quote: Looking through your code it looks like you have tried to use API calls as much as possible to avoid duplication of code etc which i think is a great idea, but i'm trying to find a way to pull just the last x minutes of logs (lets call it 30) and i cant see a way to do this with the API calls directly.


If there isn't a way to get what you need with the API, then go and get it with SQL. :-)  You don't have to look very far in the Export.pm class to see that I had to add quite a bit of SQL.

Quote: The SQL i have so far is: select * from nt_user_global_log where timestamp > $time ($time set earlier to = time-1800 for now) and i'm using the passed dbix_w handle to make the calls. (i'll likely drop that down to 5 minutes or faster and run the cron on that schedule)


You don't want to do a ($time set earlier to = time-1800) type of thing. Instead, store the last export timestamp in nt_nameserver_export_log.date_start. Then, when you start the next export, pull all changes since your last *successful* export (see get_last_ns_export). I.e., something like this:

SELECT * FROM nictool.nt_user_global_log WHERE timestamp > (SELECT UNIX_TIMESTAMP(date_start) FROM nt_nameserver_export_log WHERE success=1 AND nt_nameserver_id=3 ORDER BY date_start DESC LIMIT 1)

That will get you just the things that have changed since the last successful export. For testing, you'll probably want to create a fake export that succeeded, make some changes, and then verify that you're consistently getting the same set of export items (until the next successful export).

You'll probably notice that query also returns user, group, and nameserver changes. You probably don't want those, so you can omit them by updating the query:

SELECT * FROM nictool.nt_user_global_log WHERE timestamp > (SELECT UNIX_TIMESTAMP(date_start) FROM nt_nameserver_export_log WHERE success=1 AND nt_nameserver_id=3 ORDER BY date_start DESC LIMIT 1) AND object IN ('zone','zone_record')

abeeson

Cheers Matt, that is awesome; I'll check them all out now.

I had actually wanted to suppress the per-zone updates launched from Base.pm as well, but now that I've looked at it, I want to keep them. The output I basically have now is an nsupdate.log which could be injected per run, as well as individual zone files which could be used to "kickstart" a blank name server with all the records.

I'll look at the SQL changes now; I have a few use cases I know I haven't catered for, so I'll need to check those out too.

I'll look into adjusting the timestamp pull as well; that would be great for not constantly attempting to push blank files.

abeeson

OK, so thanks to your SQL suggestions and some other work, I have this much closer to completed now. It's all functioning as expected, though I still need to test generation of some specific entries that I don't have in there yet.

I'll update my git clone with it, and when I'm happy with it I'll put up a discussion / pull request :)

abeeson

Thanks to Matt's help, this is now pulled into the main git repo for NicTool and should be in future installs as its own export option :)

For anybody interested, getting keys to sign the updates currently requires a slight edit to add the key file, though I'll look at moving that somewhere more generic like the config.