fffeee


  1. Can't speak to all of this, but I use Arq from Haystack Software for encrypted, de-duplicated off-site backups to Amazon S3, Google Nearline, Amazon Drive, and folders on local media or my NAS. It also supports sftp targets. The licensing is easy, and there is an open-source recovery tool written in Python on GitHub should the software ever stop being maintained. You can configure it quite granularly, it supports all macOS/HFS+ metadata, and it restores that metadata perfectly. You can also create different backup sets on different destinations, and you can do snapshot-style hourly incremental backups that age out to daily, weekly, monthly, and yearly. I keep some stuff in Amazon S3 Glacier, some in Google Nearline, and some on regular S3; my current working documents are archived nightly off-site, but I do hourly snapshots locally.

    My NAS at home is FreeNAS, and I was a Solaris admin in a former life, so I prefer ZFS for redundant/reliable storage. There is OpenZFS on OS X, which is a very good implementation for Macs; I use it on an iMac 5K Retina 27" and a MacBook Pro at work with USB 3 devices, and on my Hac at home with Thunderbolt/eSATA and occasionally USB 3 drives (usually just to duplicate or send/recv a pool snapshot). You can't boot from ZFS, but that doesn't bother me much personally; I am fine with jhfs+ for a system and application volume, and just having my important stuff on ZFS or ZFS-backed jhfs+ zvols is safe enough for me.

    The Time Machine service in OS X Server is pretty good; you can target ZFS datasets or zvols and easily recover a MacBook Pro's backup to a fresh disk. People do it all the time; I tested it once with one of the household MacBook Airs and it went without incident. The other nice thing about OS X Server is that goddamned caching server for iCloud/App Store. It's awesome if you have a household of Macs and iOS devices.
The software update service can easily balloon if you leave it at defaults, so I selectively fetch large OS releases, safebrowsing updates, and the like by hand, but don't bother mirroring all the voices and other one-off items that aren't used by every single device in the house.

On the Pro you have one good option for storage: Thunderbolt. Think about what filesystem you want to use, and whether you'll be an early adopter of APFS or intend to use Core Storage. Don't plan on AppleRAID or the like; there are already better options, and it's likely to be pulled at some point in the near future. I prefer software like ZFS to handle devices, but you can find Thunderbolt hardware RAID enclosures out there. I stick to JBODs, or enclosures that can present the devices as such, just because I hate hardware RAID unless I have the budget and requirements for single-vendor (e.g. spending more than USD $200k with someone like HP/NetApp/Oracle). For my personal use I'm going to be using commodity hardware piecemeal, and it needs to adapt and change over time, or I end up trapped in some legacy universe where I demand a PS/2 port, Snow Leopard, or a floppy drive to go along with my powdered wig and buggy whips.
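The snapshot duplication mentioned above can be sketched like this. A minimal sketch, assuming OpenZFS command-line tools; the dataset and pool names (`tank/docs`, `backup`) are placeholders:

```shell
# Hedged sketch: take a dated snapshot of a dataset and replicate it to a
# second pool. 'tank/docs' and 'backup' are hypothetical names.
snap="tank/docs@hourly-$(date +%Y%m%d%H)"
zfs snapshot "$snap"
# Full send of the snapshot into a dataset on the backup pool:
zfs send "$snap" | zfs recv backup/docs
# Later runs can ship only the delta between two snapshots:
# zfs send -i tank/docs@old tank/docs@new | zfs recv backup/docs
```

Incremental send/recv is what makes the hourly-to-daily-to-weekly aging scheme cheap: each replication only moves the changed blocks.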
  2. fffeee

    Overview of all possible Clover boot flags

    Not all. Just most.
  3. fffeee

    Overview of all possible Clover boot flags

    The best way to see which boot arguments XNU currently accepts is to grab the source for the most recently released version of XNU from Apple's repository. Once you have it cloned, `grep`(1) through the code for `PE_parse_boot_arg` and you'll have most of them. It's pointless to maintain a static list since more are added over time, so generating a current one yourself is the best way to answer the question.
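As a sketch, that search looks like the following. The GitHub org/repo path is an assumption (Apple has published XNU on opensource.apple.com and, more recently, on GitHub); adjust to wherever the current release lives:

```shell
# Hedged sketch: fetch the XNU sources and list every call site that parses
# a boot argument. The repository URL is an assumption; adjust as needed.
git clone https://github.com/apple-oss-distributions/xnu.git
cd xnu
# The quoted first argument at each match is the boot-arg name, e.g.
#   PE_parse_boot_argn("debug", &debug_boot_arg, sizeof(debug_boot_arg))
grep -rn 'PE_parse_boot_arg' bsd iokit osfmk pexpert
```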
  4. fffeee

    Remap keyboard

    I use software called Seil to handle my Caps Lock key and Karabiner to remap everything else. You'll love it.
  5. I like his idea. The device limit will likely be a problem for anyone with several devices unless you're really selective in how you connect them. Anyone with USB 3 hubs that have more than 4 ports has probably encountered this since USB 3 became a thing on Macs. I have one in particular that is actually three hub devices in one enclosure, and it causes all sorts of problems for every other USB device I have; the OS barfs messages to syslog about being unable to enumerate additional devices, etc. The first time it got me I was using a pair of JBODs, which was really uncool. I found an interesting write-up/explanation at the time that may interest some: http://apple.stackexchange.com/questions/120777/2012-macbook-air-usb-hardware-ran-out-of-device-slots. The tl;dr was to use 4-port hubs only, because 7-port hubs are actually two hubs in a chain. Some other interesting USB-related shenanigans were discussed there as well.
  6. fffeee

    Clover Bug/Issue Report and Patch

    FWIW my Asus Z97 Gryphon uses `HDAU`, so I edit Toleda's script to replace occurrences of `HDEF` with `HDAU` and my audio works fine. Note that I'm using an SSDT-based method of configuring devices (via RampageDev's method).
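That edit can be done mechanically. A minimal sketch, assuming a plain `HDEF` to `HDAU` substitution is all that's needed; the script filename is a placeholder:

```shell
# Hedged sketch: replace every HDEF occurrence with HDAU in a copy of the
# script. 'audio_script.sh' is a placeholder filename.
sed 's/HDEF/HDAU/g' audio_script.sh > audio_script_hdau.sh
```

Writing to a copy rather than editing in place keeps the original script intact in case the device name turns out to differ elsewhere.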
  7. fffeee

    Clover Bug/Issue Report and Patch

    Which is more likely: your USB ports are working because of a kernel extension and SSDT change you made, or the addition of a completely unrelated DSDT patch mask to Clover that adds a method to _PTS at shutdown?
  8. The post above yours has a possible solution.
  9. fffeee

    Clover Bug/Issue Report and Patch

    Any chance I could eyeball your config and see what I'm doing wrong? I'd hate to think it's my ZFS pool confusing things; that would be a big problem for me. I'm not trying to do anything stupid like boot off it, so I don't think it's related. Edit: no longer needed; the issue was with my SSDT-1, which I believe Toleda's script created. At any rate, after removing that file it's fine. I went back to VoodooHDA for my audio since I only use USB devices and S/PDIF anyway.
  10. fffeee

    Clover Bug/Issue Report and Patch

    Clover 3272 cannot boot my Z97 Gryphon workstation, but I can boot it successfully with Clover 3229. It hasn't been bootable with newer builds for quite a while, and I'd like to get it squared away before the release of 10.11 if I can. I'm using the same config.plist, but this is a photo of the display when I boot into verbose mode with 3272. Without verbose mode, the symptom is just a status bar under the logo that appears stuck indefinitely.

    I have a zpool with an SSD for ZIL and L2ARC installed in this workstation, in addition to the OS and applications SSD, which is HFS+. One thing I found puzzling:

    ## 3272
    2:319 0:000 found 39 volumes with blockIO

    while

    ## 3229
    3:330 0:000 found 34 volumes with blockIO

    Both Clover builds arrive at the Clover GUI correctly, default to the correct boot volume, and successfully (apparently) insert nvram values and EFI payloads. Both get my boot variables as well. Mostly. My config has:

    <string>slide=0 darkwake=10 nvda_drv=1 kext-dev-mode=1 keepsyms=y</string>

    But both record in preboot:

    ## 3272
    2:608 0:000 found boot-args in NVRAM:kext-dev-mode=1 keepsyms=y nvda_drv=1, size=37
    ## 3229
    3:444 0:000 found boot-args in NVRAM:kext-dev-mode=1 keepsyms=y nvda_drv=1, size=37

    So not everything is making it into nvram? My config.plist is available on pastebin. I'm not opposed to spamming pastebin URLs of my preboot logs if it's relevant; I just don't know how to proceed debugging on my own at this point.

    Note: I use RampageDev's SSDT on this workstation and apply Clover-centric fixes for the audio device, but I use S/PDIF into my audio equipment for digital audio.
  11. Avoid the ambiguity entirely and use iperf. I have three network interfaces in my workstation and did a few 10-second tests over each; they all look similar to this:

    [ ID] Interval       Transfer     Bandwidth
    [  4] 0.0-10.0 sec   1.08 GBytes  927 Mbits/sec   # Yukon interface
    [  5] 0.0-10.0 sec    706 MBytes  592 Mbits/sec   # AirPort Extreme interface
    [  4] 0.0-10.0 sec   1.09 GBytes  934 Mbits/sec   # IntelMausi interface

    So around 115-116 MB/s on the wired interfaces. That Yukon card is a relic, but is very reliable. (Edit: the iperf server in my test is on the same physical network as the client and connected to it over gigabit ethernet via its truly horrible Realtek interface, on a FreeBSD system.)
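For reference, a typical iperf 2 run looks like the following (the server address is a placeholder), and dividing the reported Mbits/sec by 8 gives MB/s:

```shell
# On the machine acting as the server:
#   iperf -s
# On the client, a 10-second TCP test (192.168.1.10 is a placeholder):
#   iperf -c 192.168.1.10 -t 10
# Converting the reported bandwidth to MB/s, e.g. for the 927 Mbit/s result:
awk 'BEGIN { printf "%.0f MB/s\n", 927 / 8 }'   # prints "116 MB/s"
```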
  12. VMware Fusion's network extensions are in /Applications/VMware\ Fusion.app/Contents/Library/kexts, and they're not running until a VM is started. Fusion uses its own natd for network configurations that use NAT, and creates vmnet[n] network interfaces for bridged/NAT/private networks. At a glance it looks like the same natd they use on ESXi. Their vmnet tools let you bring interfaces up/down and create/destroy them ad hoc, but you are correct that the kexts stay loaded (though with nothing to do once the interfaces are destroyed). If the presence of the extensions alone is a possible culprit, despite their having no interfaces to manage, that would be rather interesting.
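A quick way to check whether the vmnet interfaces and the kexts are actually present at a given moment; a sketch, assuming the stock interface naming (vmnet1/vmnet8 are Fusion's defaults for host-only and NAT):

```shell
# List any vmnet interfaces currently configured:
ifconfig -l | tr ' ' '\n' | grep '^vmnet' || echo "no vmnet interfaces"
# See whether the VMware kexts are currently loaded:
kextstat | grep -i vmware || echo "no vmware kexts loaded"
```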
  13. fffeee

    Clover General discussion

    You could have stopped there. Read that file.
  14. Any chance you've got stale {censored} laying around in the lockdown folder[1]? When I've had inconsistent sync or device connections on my Hac, my first steps are to blow away that folder's contents (devices will re-prompt with "Do you trust this computer?") and disable Sync This Device over Wi-Fi in iTunes, because it pollutes libimobiledevice too much (libimobiledevice tools won't know which device you want to talk to when they're all phoning in to usbmuxd over Wi-Fi).

    [1]: The "lockdown folder" is `/var/db/lockdown` and holds the certs for devices that have formed trust relationships with that system.
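The reset described above amounts to the following; a sketch, and note it forces every previously paired device to re-trust the machine:

```shell
# Hedged sketch: clear the pairing records so devices re-prompt
# "Do you trust this computer?" on next connect. Requires root.
sudo rm -f /var/db/lockdown/*.plist
```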
  15. Kind of. I use btsync to sync a couple of directories to my NAS, Hac desktop, and rMBP, so fewer than half a dozen peers. My resource/descriptor issue hasn't recurred recently, using the d3 driver. I need to raise some of the network and files/memory values on my workstation from their defaults anyway; people seeing stalled messages or having performance issues may want to consider some of my values:

    ### kernel tuning
    # i have 32GB of memory but i also use zfs.
    # some of this tuning is in response to bottlenecking i've
    # encountered.
    #
    kern.maxfiles=280000
    kern.maxfilesperproc=20480
    # kern.num_files=24270
    kern.maxvnodes=280000
    # i'm almost always using every single one of those ^
    kern.maxproc=2500
    kern.maxprocperuid=2400
    kern.ipc.somaxconn=2500
    kern.maxnbuf=60000
    kern.num_taskthreads=2560

    ### if the OS X default of 3 is not big enough
    #
    # net.inet.tcp.win_scale_factor=4

    ### increase OS X TCP autotuning maximums
    #
    # net.inet.tcp.autorcvbufmax=16777216
    # net.inet.tcp.autosndbufmax=16777216

    ### stack tuning
    #
    # net.inet.tcp.v6mssdflt=1428
    net.inet.tcp.mssdflt=1448
    net.inet.tcp.v6mssdflt=1412
    net.inet.tcp.msl=15000
    net.inet.tcp.always_keepalive=0
    net.inet.tcp.delayed_ack=3
    net.inet.tcp.slowstart_flightsize=20
    net.inet.tcp.local_slowstart_flightsize=9
    net.inet.tcp.blackhole=2
    net.inet.udp.blackhole=1

    ### adjusting send/recv space
    #
    net.inet.tcp.sendspace=1042560
    net.inet.tcp.recvspace=1042560

    At home I don't have native IPv6 and use a tunnel from Hurricane Electric. I've been making some adjustments for that tunnel that may or may not be relevant (to anyone).

    ---

    Footnote: loginwindow doesn't depend on en0, but because en0 wasn't there and my other interface was, the system hostname changed, and an install or update was unable to get focus above loginwindow without blocking display of both, for whatever reason.

    I found a couple of people on Apple support forums with the same issue, where loginwindow is just a black screen and a cursor. You can work around it by logging in blind: on workstations that don't require typing a username and password, press Tab, type the first letter of the username shown to highlight it (even though you can't see it), hit Enter, type your password, hit Enter, and then finally see the dialog. Or take the nuclear option and blow away the contents of /var/folders, where the App Store or softwareupdate or whatever stashed the "now do this on boot" cruft that is holding you up. My en0/Intel onboard NIC hasn't failed lately, so maybe the two were related.